237583626
pes2o/s2orc
v3-fos-license
Genetic Diversity of SARS-CoV-2 among Travelers Arriving in Hong Kong

We sequenced 10% of imported severe acute respiratory syndrome coronavirus 2 infections detected in travelers to Hong Kong and revealed the genomic diversity of regions of origin, including lineages not previously reported from those countries. Our results suggest that international or regional travel hubs might be useful surveillance sites to monitor sequence diversity.

Hong Kong uses an elimination strategy to control coronavirus disease (COVID-19) that includes stringent travel restrictions to reduce the risk of introducing severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) into local communities (1). COVID-19 testing was mandated on departure and arrival for all inbound travelers. Compulsory 14-day home quarantine was put in place for all arrivals beginning March 19, 2020. Nonresidents were banned from entry after March 25. In subsequent months, persons arriving from high-risk locations were required to quarantine in hotels; by November, all arrivals had to quarantine in hotels. On December 25, the quarantine period was extended to 21 days. Predeparture COVID-19 testing was mandated for travelers inbound from high-risk locations. Furthermore, daily health declarations were required from all quarantined travelers, and respiratory samples were collected on arrival, day 12, and day 19 (for 21-day quarantine) for reverse transcription PCR (RT-PCR) testing. As of April 25, 2021, authorities had recorded 11,731 RT-PCR-positive COVID-19 cases in Hong Kong. About 20% (2,350) of the laboratory-confirmed COVID-19 cases were considered imported, detected in persons thought to have been infected outside of Hong Kong. Here, we report the analyses of 10% of these imported cases through genome sequencing. To estimate the viral sequence diversity among these imported cases, we performed next-generation sequencing on 10% (221) of clinical samples collected (2,3) (Appendix). We selected a greater proportion of samples (204) beginning in June 2020, when greater genetic diversity began to appear globally. The number of samples we sequenced by country of origin was proportional to all cases detected in travelers from that country (R = 0.91). Using the Pangolin classification system, we classified the sequences into lineages (Table 2). We detected 2 variants of concern (VOC) and 3 variants of interest (VOI; Table 1) (5). VOC B.1.1.7 (Alpha variant), which began spreading rapidly in the United Kingdom in November 2020 (6,7), was the most common VOC in our study (39 cases). We first detected this lineage in a passenger arriving from the United Kingdom on December 13, 2020, and we subsequently detected it in another 38 travelers from other countries, predominantly from the Philippines and Pakistan (Table 1). This finding corresponds with data from global surveillance that indicate this lineage has been circulating over a wide geographic range beginning in December 2020.
The second VOC, B.1.351 (Beta), was first reported to circulate widely in South Africa beginning in November 2020 (8). Fifty percent of our cases were imported from 5 middle-income countries in Asia: India, Indonesia, Nepal, Pakistan, and the Philippines (https://databank.worldbank.org/data/download/site-content/CLASS.xls; Appendix Table 1). We wanted to compare the genomic diversity of SARS-CoV-2 imported from these countries with those reported in the GISAID database (https://www.gisaid.org). However, the Philippines, Nepal, and Pakistan had limited SARS-CoV-2 sequence information in the GISAID database (Table 2) (9). Of the 3 VOC or VOI we identified in travelers from the Philippines (Table 2), B.1.351 was not among sequences the Philippines submitted to GISAID, but the March 6-20, 2021, arrival dates of the 5 case-patients with B.1.351 suggest unreported domestic circulation of that lineage. Similarly, Nepal had reported to GISAID only 15 of the 20 viral sequences from 8 lineages we had identified. Other countries also had not previously reported to GISAID several lineages we identified, including 3 from India and 1 each from Pakistan and Indonesia. We did not analyze samples from travelers from some countries, either because those countries had their own extensive domestic sequencing efforts or because we had few samples from them (<5 per country). We further compared GISAID data with our data from the Philippines, Nepal, and Pakistan. We retrieved the earliest collection date for each lineage we detected that these countries had also reported to GISAID; some of those dates were close to the first dates of arrival for case-patients with those lineages in our study. In fact, for over half of the lineages reported in both sources, we identified the lineage either before or <1 month after it was reported by the country (Appendix Table 3), highlighting the potential use of this method of surveillance to assess genomic diversity in regions with limited sequence information. The emergence of VOC and VOI in different geographic locations highlights the need for global-level genomic surveillance of SARS-CoV-2 (10), but genomic sequencing information from some regions remains incomplete. Our findings suggest that travel hubs such as Hong Kong can be used as surveillance sites to identify infected travelers from regions with widespread circulation of lineages of interest. Such indirect surveillance might provide useful data to partially reveal virus diversity in countries with limited sequence information, leading to better preparedness for and response to newly emerging SARS-CoV-2 variants. However, findings from these indirect analyses are likely to be only partial and skewed by the level of passenger traffic to destination countries from various points of departure. Also, the extent of different virus lineages circulating in a country of departure may have affected our observations; lineages that circulate at a low level in a country of interest might be missed by our current strategy. Optimizing this approach, such as by directing sequencing efforts toward travelers departing from targeted countries or regions rather than at the points of arrival, might help overcome those limitations.
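The lead-time comparison described above (first arrival date of a lineage in Hong Kong versus the earliest collection date the country of origin reported to GISAID) is simple date arithmetic. A minimal sketch; the lineage names and dates below are entirely hypothetical stand-ins for Appendix Table 3, not the study data:

```python
from datetime import date

# Hypothetical data: first arrival of a lineage in this study vs. the earliest
# collection date reported to GISAID by the country of origin.
first_arrival = {"lineage-A": date(2021, 3, 6), "lineage-B": date(2021, 1, 10)}
gisaid_earliest = {"lineage-A": date(2021, 3, 20), "lineage-B": date(2020, 12, 28)}

for lineage, arrived in first_arrival.items():
    reported = gisaid_earliest[lineage]
    lag_days = (arrived - reported).days   # >0: detected after the country's first report
    flagged = lag_days < 30                # detected before, or <1 month after, that report
    print(f"{lineage}: lag {lag_days:+d} days, before-or-within-1-month={flagged}")
```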
2021-09-22T06:17:08.324Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "d171277e54ee1e757395b22b6d1e330cfba69b45", "oa_license": "CCBY", "oa_url": "https://wwwnc.cdc.gov/eid/article/27/10/pdfs/21-1028.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1a1fe60224d4aaa1d72e2ed96853b5501f4f9090", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
17566414
pes2o/s2orc
v3-fos-license
IL-17 producing T cells correlate with polysensitization but not with bronchial hyperresponsiveness in patients with allergic rhinitis

Background Th2-type T cell responses have a considerable role in atopic diseases. The involvement of Th17 cells and IL-17 in the atopic process has provided new understanding of allergic diseases. Bronchial hyperresponsiveness is quite common in allergic rhinitis. We aimed to explore the expression of IL-17 producing CD3+ CD4+ T cells in the peripheral blood of rhinitic patients, with or without bronchial hyperresponsiveness, who were sensitized to common allergens, as this relationship has not been examined. Methods Sixty-one patients with allergic rhinitis and thirty controls were examined. IL-17 producing T cells were detected by flow cytometry; IL-17, IL-4 and IL-13 levels in peripheral blood were evaluated by ELISA. Bronchial hyperresponsiveness was investigated with a methacholine challenge test. Atopy was evaluated by skin prick tests with common allergens. Results The percentage of IL-17 producing T cells was significantly higher in the AR group (2.59 ± 1.32) than in controls (1.24 ± 0.22) (p = 0.001). A significant sex-related difference in CD3+ CD4+ IL-17 T cells was observed: 3.15 ± 1.8% in male patients versus 2.31 ± 0.9% in female patients (p = 0.02). Rhinitic patients showed greater bronchial responsiveness to methacholine than controls (p = 0.001); however, the percentages of T cells in both groups appeared equal. Serum IL-17 levels were significantly higher in the AR group (5.10 ± 4.40 pg/ml) than in controls (3.46 ± 1.28 pg/ml) (p = 0.04). IL-4 levels (0.88 ± 1.27 pg/ml) and IL-13 levels (3.14 ± 5.85 pg/ml) in patients were significantly higher than in controls (0.54 ± 0.10 pg/ml, p = 0.001, and 1.19 ± 0.64 pg/ml, p = 0.001, respectively). The percentages of T cells in patients sensitized to 5 allergens (group I) were significantly lower (1.91 ± 0.62) than in those sensitized to more than 5 allergens (group II; 2.91 ± 1.5) (p = 0.004). Conclusions The observed higher levels of IL-17 producing T cells in polysensitized males suggest a role of IL-17 in the pathogenesis of AR. The higher airway responsiveness in AR may not be Th17 dependent. The higher serum values of IL-17, IL-4 and IL-13 demonstrate the presence of cytokine balance in atopic diseases. Background A trend of increasing prevalence of allergic rhinitis (AR) worldwide sustains interest in investigating the pathogenesis of the disease. Th2 cells are suspected as the key culprit in the pathophysiology of atopic disorders [1]. Interleukin 17A (IL-17A), commonly known as IL-17, is produced by the T helper 17 (Th17) subset of CD4+ T cells. Recent progress in understanding Th17 cells suggests a pathological role of IL-17 in allergic responses [2]. However, the involvement of Th17 cells in AR has not been clearly examined [3]. Th2 cells produce IL-4 and IL-13 and mediate allergic responses, and these cytokines have been extensively studied as key players in atopic airway diseases. The roles of IL-4 in IgE production and IL-13 in bronchial hyperresponsiveness (BHR) and tissue remodeling are evident [1]. Th17 cells and IL-17, the major cytokine of this family, are usually associated with autoimmune reactions or neutrophilic inflammation. Nevertheless, it has been demonstrated that allergic sensitization through the airway promotes a strong Th17 response and acute BHR in a mouse model of asthma [4]. Asymptomatic airway hyperreactivity is common in people with AR, but the immunological mechanisms remain to be defined.
There are no data on the precise role of Th17 cells in these events. Based on this, we hypothesized that IL-17 may mediate airway hyperresponsiveness in AR patients. The present study examined the immunologic characteristics of sensitized patients with allergic rhinitis, with or without bronchial hyperresponsiveness, in comparison with healthy controls. Subjects Sixty-one subjects (41 females and 20 males, mean age 36.25 ± 5 years) with persistent moderate-to-severe allergic rhinitis and thirty healthy controls (18 females and 12 males, mean age 35.73 ± 6 years) were included in the study. The subjects were selected by detailed clinical history of nasal obstruction and/or rhinorrhea, sneezing, and itching, and sensitization to at least one perennial allergen. Selected patients had experienced these symptoms for at least two years. Exclusion criteria were: bronchial asthma, chronic rhinosinusitis, nasal polyposis, excessive septal deviation and current smoking. Blood samples for assessment of cytokines, IL-17 producing T cells and peripheral eosinophil counts were taken from patients and controls. The study was approved by the ethics committee of MU-Pleven, Bulgaria. Informed consent was obtained from all the subjects. Atopy assessment The skin prick tests (SPT) were performed on patients and controls according to the method of Pepys [5]. Direct bronchial challenge testing with methacholine The methacholine provocation test was performed according to the 2-min tidal breathing method, with inhalation of an aerosol from a nebulizer at an output of 0.13 mL/min. Saline was used as control. Methacholine concentrations from 0.03 to 16 mg/ml were used [6]. Forced expiratory volume in one second (FEV1) was measured at 30 and 90 sec after each completed two-minute inhalation, preceded by baseline spirometry. An acceptable-quality FEV1 was obtained at each time point. The test was discontinued when a drop of 20% in FEV1 occurred. The protocol included a post-bronchodilator test. The patients and controls met the following criteria: nonsmokers and FEV1 ≥80% of predicted values before testing. A modified version of the English Wright nebulizer and a "Spirovit sp-10" spirometer (Schiller, Switzerland) were used. Expression of IL-17A in peripheral blood T lymphocytes Blood sample collection and PBMC culture Blood samples from studied subjects were collected in EDTA vacutainers (Becton Dickinson, USA) and used for peripheral blood mononuclear cell (PBMC) isolation. PBMC were obtained by centrifugation on Ficoll-Paque Plus (GE Healthcare, Sweden) and resuspended in 10% FBS-RPMI 1640 (Invitrogen). Cells were cultured in the presence of 50 ng/ml phorbol myristate acetate (PMA) (Sigma) for 5 hours at 37°C. After the first hour of incubation, a protein transport inhibitor (GolgiPlug, Becton Dickinson) was added, followed by subsequent incubation and washing procedures. Intracellular staining of IL-17A cytokine Cells were stained for cell surface markers with anti-human CD3 PerCP (Becton Dickinson) and FITC anti-human CD4 MAb (Becton Dickinson), followed by fixation and permeabilization with Cytofix/Cytoperm. PE anti-human IL-17A MAb (Becton Dickinson Pharmingen) was used for the intracellular cytokine staining. After washing with PBS and 1% formaldehyde fixation, blood samples were stored at 4°C prior to flow cytometry. An isotype control was included in each assay. A FACSort flow cytometer (Becton Dickinson) was used. Results were expressed in percentages.
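The positivity criterion of the methacholine challenge described above (the test is stopped once FEV1 falls by 20% from baseline) amounts to simple percentage arithmetic. A minimal sketch with hypothetical FEV1 readings in litres; only the 20% threshold comes from the text, everything else is illustrative:

```python
def fev1_fall_percent(baseline: float, current: float) -> float:
    """Percentage fall in FEV1 relative to the baseline value."""
    return (baseline - current) / baseline * 100.0

# Hypothetical readings after successive methacholine concentrations (mg/ml)
baseline_fev1 = 3.40
readings = {0.03: 3.35, 0.25: 3.21, 1.0: 3.02, 4.0: 2.65}

for concentration, fev1 in readings.items():
    fall = fev1_fall_percent(baseline_fev1, fev1)
    print(f"{concentration} mg/ml: FEV1 fall {fall:.1f}%")
    if fall >= 20.0:  # challenge discontinued once a 20% drop occurs
        print(f"Positive test at {concentration} mg/ml")
        break
```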
In vitro determination of IL-4, IL-13 and IL-17 Specimen collection Five ml of peripheral blood were collected from all subjects (patients and controls) using pyrogen/endotoxin-free collecting tubes. Serum was removed rapidly and carefully from the red cells after clotting, followed by centrifugation at approximately 1000 × g for 10 min. IL-4, IL-13 and IL-17 levels in serum were analyzed using commercially available pre-coated enzyme-linked immunosorbent assay (ELISA) kits (Diaclone SAS, France). All reagents were brought to room temperature before use. Preparation of wash buffers, standard diluent buffers and controls followed the instructions in the manufacturer's manual. After the final preparation of all reagents, the average absorbance values for each set of triplicate standards, controls and samples were calculated within 20% of the mean. A linear standard curve was generated. The amounts of IL-4, IL-13 and IL-17 in each sample were determined by extrapolating OD values using the standard curve. The lower limit of detection was <0.54 pg/ml for IL-4, <1.17 pg/ml for IL-13 and <3.2 pg/ml for IL-17. Statistical analysis Variables with normal distribution were expressed as mean and standard deviation. For comparison of two independent groups, the nonparametric Mann-Whitney U test was used. Statistical analysis was performed using SPSS (version 19.1). The results were expressed as mean ± standard deviation (SD). P < 0.05 was considered significant. Demographic characteristics of study population A total of 91 age-matched subjects (61 patients and 30 controls) were included in the study. Demographic data of the AR group and controls, as well as eosinophil (Eo) counts and BHR, are presented in Table 1. IL-17 producing T cells in peripheral blood were higher in AR patients Average percentages of IL-17 producing T cells determined by flow cytometry ranged from 0.57 to 1.84% in healthy subjects and from 1.34 to 6.84% in the patients (Figure 1). IL-17 producing T cells were higher in males in both groups of rhinitics and controls The gender-related outcome was assessed. In the study, 64.8% of the subjects were women. The percentages of CD3+ CD4+ IL-17 T cells were significantly higher in male patients: 3.15 ± 1.81 versus 2.31 ± 0.91 in females (p = 0.02) (Figure 3). Table 2 presents the relationship between average percentages of Th17 cells expressing IL-17 and different perennial (indoor) allergens in patients and controls. Table 3 shows the corresponding relationship with outdoor allergens. IL-17 expression was dependent on sensitization rate In terms of skin prick test results, patients were assigned to group I (sensitized to five allergens) and group II (sensitized to more than 5 allergens). The average IL-17 producing T cell percentages in the two groups were significantly different: 1.91 ± 0.62% for group I and 2.91 ± 1.5% for group II (p = 0.004) (Figure 4). Increased levels of IL-4, IL-13 and IL-17 in AR patients To elucidate the role of IL-17 in allergic inflammation in parallel with the classical participants (IL-4 and IL-13), all three cytokines were analyzed by ELISA in the patients and controls. The mean values in healthy subjects were within the normal ranges, close to those given in the manufacturer's manual. The mean levels in healthy controls were 0.54 pg/ml (IL-4), 1.17 pg/ml (IL-13), and 3.2 pg/ml (IL-17), as measured in our laboratory. Values above these thresholds were accepted as pathological (Figure 5).
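The group comparisons above (for example, IL-17 producing T cell percentages in AR patients versus controls) were made with the nonparametric Mann-Whitney U test in SPSS; the same comparison can be reproduced with SciPy. A minimal sketch with made-up values, not the study data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical IL-17 producing T cell percentages (illustrative only)
ar_patients = np.array([2.1, 3.4, 1.9, 2.8, 4.6, 2.2, 3.0])
controls = np.array([1.1, 1.3, 1.0, 1.4, 1.2, 1.5])

stat, p_value = mannwhitneyu(ar_patients, controls, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.4f}")
print(f"AR: {ar_patients.mean():.2f} ± {ar_patients.std(ddof=1):.2f}, "
      f"controls: {controls.mean():.2f} ± {controls.std(ddof=1):.2f}")
```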
The average values for IL-4, IL-13 and IL-17 levels were higher in patients compared to controls. IL-4 levels were significantly higher in patients (0.88 ± 1.27 pg/ml) than in controls (0.54 ± 0.10 pg/ml) (p = 0.001). There was a significant difference between the AR group and healthy subjects for all investigated interleukins. Table 4 presents the association between each aeroallergen causing symptoms throughout the year and serum levels of IL-4, IL-13 and IL-17. The perennial house dust mite allergens correlated significantly with all three interleukins. Animal feather allergen was strongly associated with IL-4 and IL-13. The data on blood levels of the studied variables (IL-4, IL-13 and IL-17) and outdoor allergens are shown in Table 5. The mean IL-4 values were significantly higher in patients sensitized to birch and beech pollens. IL-13 blood concentrations were significantly higher in beech-sensitized rhinitics. There was no significant difference in IL-17 serum levels for outdoor allergens. Allergic rhinitis patients were hyperresponsive to methacholine independently of IL-17 producing T cells In comparison with healthy subjects, those with AR had a higher degree of bronchial responsiveness to methacholine challenge (67.2% versus 23.3%, respectively; p = 0.001). The average percentages of IL-17 producing T cells (CD3+ CD4+) in methacholine-positive and methacholine-negative rhinitics appeared equal (p > 0.05). Discussion This study shows that increased levels of IL-17 producing T cells may play a role in allergic inflammation in patients with AR. Our observations suggest that IL-17 production might be triggered by the increasing number of sensitizations. In the pathophysiology of allergic rhinitis, the sensitization process includes activation of allergen-specific T cells, commonly of the Th2 subtype. These specific T cells stimulate the production of allergen-specific IgE, which is central to AR [7]. A recent study on IgE production in human B cells found that IL-17 could induce B cell class switching to IgE, which implies Th17 involvement in the atopic phenomenon [8]. Research on IL-17 producing cells in allergic sensitization is therefore essential not only for increasing our understanding of the mechanism of AR but also for clinical practice. The process of polysensitization is associated with a substantial impact on the quality of life of rhinitis patients [9]. It has been reported that polysensitized subjects experienced more severe symptoms than monosensitized ones [9] and frequently presented with associated asthma. These clinical implications underline the importance of IL-17 in the pathogenesis of allergic rhinitis. Furthermore, we examined IL-17 dependent BHR in AR patients for the first time. The results obtained show that subjects with BHR to methacholine exhibit similar percentages of IL-17 producing T cells as methacholine-negative patients. Yet, the subjects with asymptomatic BHR differed significantly from healthy controls. It would be important to follow up the AR patients with BHR to understand whether they are at greater risk of developing IL-17 dependent asthma symptoms within the next few years.
The role of IL-13 in mediating lower airway inflammation and BHR following isolated allergic sensitization of the upper airways was suggested in a recent study by Wang et al. [10]. The authors highlighted the role of interleukins in the communication between the upper and lower airways. Nevertheless, the recent evidence suggesting a contribution of IL-17 to BHR [11] has raised new and complex questions. Data on IL-17 mediated allergic rhinitis in humans are quite limited compared to murine experimental models. Moreover, recent studies have demonstrated that the cytokine setting inducing Th17 differentiation in humans and mice is different [12]. The same authors describe the role of IL-17 producing cells in allergic reactions as "largely unclear". In 2010, IL-17 was found for the first time to play a causative role in airway remodeling in an asthmatic mouse model [13]. However, the participation of IL-17 in upper airway tissue remodeling remains unclear and controversial. Furthermore, we demonstrate that rhinitic subjects display high levels of IL-4, IL-13 and IL-17 in peripheral blood. IL-4 and IL-13 are key players in the allergic response [1], as was also confirmed in our study. Although Th2 expression was not measured directly in this study, both IL-4 and IL-13 served as important indicators of Th2 activity. Recent studies demonstrated that both Th2 and Th17 cells are involved in the pathogenesis of allergic airway inflammation through the release of specific cytokines [14]. IL-13 is essential for the development of a late nasal response to allergen challenge [15], while IL-4 plays an important role in the early Th2 inflammatory response [1]. The discovery of the Th17 subset revised the Th1/Th2 paradigm and enhanced understanding of the heterogeneous nature of allergic disease [1]. A limitation of this study is the relatively small number of control subjects who underwent methacholine and skin prick testing, which could be overcome in a future study. Conclusions Higher levels of IL-17 producing CD3+ CD4+ T cells were demonstrated in the peripheral blood of polysensitized allergic rhinitis patients. The increased BHR in AR subjects suggests that their airway response may be enhanced upon natural exposure to the multiple allergens to which they are sensitized. The role of Th2-associated cytokines in allergic inflammation was confirmed. The high levels of IL-4, IL-13 and IL-17 in AR subjects upon multiple allergen exposure support the idea of the heterogeneous nature of allergic disease and the role of Th1/Th2/Th17 balance in the pathogenesis of AR.
2018-04-03T05:45:00.152Z
2014-01-15T00:00:00.000
{ "year": 2014, "sha1": "67746cf9b210827fec2587a75161f950dafbe79a", "oa_license": "CCBY", "oa_url": "https://ctajournal.biomedcentral.com/track/pdf/10.1186/2045-7022-4-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cd1f32d7b8cb3aa4fedefd13ebf24e0c49690ab7", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
15653345
pes2o/s2orc
v3-fos-license
Characterizing and Avoiding Routing Detours Through Surveillance States

An increasing number of countries are passing laws that facilitate the mass surveillance of Internet traffic. In response, governments and citizens are increasingly paying attention to the countries that their Internet traffic traverses. In some cases, countries are taking extreme steps, such as building new Internet Exchange Points (IXPs), which allow networks to interconnect directly, and encouraging local interconnection to keep local traffic local. We find that although many of these efforts are extensive, they are often futile, due to the inherent lack of hosting and route diversity for many popular sites. By measuring the country-level paths to popular domains, we characterize transnational routing detours. We find that traffic is traversing known surveillance states, even when the traffic originates and ends in a country that does not conduct mass surveillance. Then, we investigate how clients can use overlay network relays and the open DNS resolver infrastructure to prevent their traffic from traversing certain jurisdictions. We find that 84% of paths originating in Brazil traverse the United States, but when relays are used for country avoidance, only 37% of Brazilian paths traverse the United States. Using the open DNS resolver infrastructure allows Kenyan clients to avoid the United States on 17% more paths. Unfortunately, we find that some of the more prominent surveillance states (e.g., the U.S.) are also some of the least avoidable countries.

Introduction When Internet traffic enters a country, it becomes subject to that country's laws. As a result, users have more need than ever to determine, and to control, which countries their traffic is traversing. An increasing number of countries have passed laws that facilitate mass surveillance of their networks [24,32,35,39], and governments and citizens are increasingly motivated to divert their Internet traffic from countries that perform surveillance (notably, the United States [17,18,48]). Many countries, notably Brazil, are taking impressive measures to reduce the likelihood that Internet traffic transits the United States [9-11, 14, 30], including building a 3,500-mile long fiber-optic cable from Fortaleza to Portugal (with no use of American vendors); pressing companies such as Google, Facebook, and Twitter (among others) to store data locally; and mandating the deployment of a state-developed email system (Expresso) throughout the federal government (instead of what was originally used, Microsoft Outlook) [8,12]. Brazil is also building Internet Exchange Points (IXPs) [7], now has the largest national ecosystem of public IXPs in the world [15], and the number of internationally connected Autonomous Systems (ASes) continues to grow [13]. Brazil is not alone: IXPs are proliferating in eastern Europe, Africa, and other regions, in part out of a desire to "keep local traffic local". Building IXPs alone, of course, cannot guarantee that Internet traffic for some service does not enter or transit a particular country: Internet protocols have no notion of national borders, and interdomain paths depend in large part on existing interconnection business relationships (or lack thereof). Although end-to-end encryption stymies surveillance by concealing URLs and content, it does not by itself protect all sensitive information from prying eyes.
First, many websites do not fully support encrypted browsing by default; a recent study showed that more than 85% of the most popular health, news, and shopping sites do not encrypt by default [57]. Migrating a website to HTTPS is challenging because doing so requires all third-party domains on the site (including advertisers) to use HTTPS as well. Second, even encrypted traffic may still reveal a lot about user behavior: the presence of any communication at all may be revealing, and website fingerprinting can reveal information about content merely based on the size, content, and location of third-party resources that a client loads. DNS traffic is also quite revealing and is essentially never encrypted [57]. Third, ISPs often terminate TLS connections, conducting man-in-the-middle attacks on encrypted traffic for network management purposes [27]. Circumventing surveillance thus requires not only encryption, but also mechanisms for controlling where traffic goes in the first place. Our approach differs from most prior work in this area, which has focused on analyzing Border Gateway Protocol (BGP) routes [34,50]. Although BGP routing can offer useful information about paths, it does not necessarily reflect the path that traffic actually takes, and it only provides AS-level granularity, which is often too coarse to make strong statements about which countries that traffic is traversing. In contrast, we measure traffic routes from RIPE Atlas probes in five countries to the Alexa Top 100 domains for each country; we directly measure the paths not only to the sites corresponding to the popular domains themselves, but also to the sites hosting any third-party content on each of these sites. Determining which countries a client's traffic traverses is challenging, for several reasons. First, performing direct measurements is more costly than passive analysis of BGP routing tables; RIPE Atlas, in particular, limits the rate at which one can perform measurements. As a result, we had to be strategic about the origins and destinations that we selected for our study. As we explain in Section 2, we study five geographically diverse countries, focusing on countries in each region that are making active attempts to thwart transnational Internet paths. Second, IP geolocation, the process of determining the geographic location of an IP address, is notoriously challenging, particularly for IP addresses that represent Internet infrastructure rather than end hosts. We cope with this inaccuracy by making conservative estimates of the extent of routing detours, and by recognizing that our goal is not to pinpoint a precise location for an IP address as much as to achieve accurate reports of significant off-path detours to certain countries or regions. (Section 3 explains our method in more detail; we also explicitly highlight ambiguities in our results.) Finally, the asymmetry of Internet paths can also make it difficult to analyze the countries that traffic traverses on the reverse path from server to client; our study finds that country-level paths are often asymmetric, and, as such, our findings represent a lower bound on transnational routing detours. The first part of our study (Section 3) characterizes the current state of transnational Internet routing detours. We first explore hosting diversity and find that only about half of the Alexa Top 100 domains in the five countries studied are hosted in more than one country; when a domain is hosted in a single country, that country is often a surveillance state that clients may want to avoid.
Second, even if hosting diversity can be improved, routing can still force traffic through a small collection of countries (often surveillance states). Despite strong efforts made by some countries to ensure their traffic does not transit unfavorable countries [9-11,14,30], their traffic still traverses surveillance states. Over 50% of the top domains in Brazil and India are hosted in the United States, and over 50% of the paths from the Netherlands to the top domains transit the United States. About half of Kenyan paths to the top domains traverse the United States and about half traverse Great Britain (although these are not the same set of paths). Much of this phenomenon is due to "tromboning", whereby an Internet path starts and ends in a country, yet transits an intermediate country; for example, about 13% of the paths that we explored from Brazil tromboned through the United States. Infrastructure building alone is not enough: ISPs in the respective regions need stronger incentives to interconnect with one another to ensure that local traffic stays local. The second part of our work (Section 4) explores potential mechanisms for avoiding certain countries, and the potential effectiveness of these techniques. We explore two techniques: using the open DNS resolver infrastructure and using overlay network relays. We find that both of these techniques can be effective for clients in certain countries, yet the effectiveness of each technique also depends on the country. For example, Brazilian clients could completely avoid Spain, Italy, France, Great Britain, Argentina, and Ireland (among others), even though the default paths to many popular Brazilian sites traverse these countries. Additionally, overlay network relays can keep local traffic local: by using relays in the client's country, fewer paths trombone out of the client's country. The percentage of tromboning paths from the United States decreases from 11.2% to 1.3% when clients take advantage of a small number of overlay network relays. We also find that some of the most prominent surveillance states are also some of the least avoidable countries. For example, many countries depend on ISPs in the United States, a known surveillance state, for connectivity to popular sites and content. Clients in Brazil, India, Kenya, and the Netherlands must traverse the United States to reach many of the popular local websites, even if they use open resolvers and network relays. Using overlay network relays, both Brazilian and Netherlands clients can avoid the United States for about 65% of paths; yet, the United States is completely unavoidable for about 10% of the paths because it is the only country where the content is hosted. Kenyan clients can only avoid the United States on about 55% of the paths. On the other hand, clients in the United States can avoid every other country except France and the Netherlands, and even those two are avoidable for 99% of the top domains. State of Surveillance We focused our study on five different countries, and for each, we actively measured and analyzed traffic that originated there. These five countries were chosen for specific reasons, which we present here. We also discuss countries that currently conduct surveillance; this exploration is not exhaustive, but highlights countries that are passing new surveillance laws and countries that already have strict surveillance practices. Studied Countries We selected Brazil, the Netherlands, Kenya, India, and the United States for the following reasons. Brazil.
It has been widely publicized that Brazil is actively trying to avoid having its traffic transit the United States. Brazil has been building IXPs, deploying underwater cables to Europe, and pressuring large U.S. companies to host content within Brazil [7-12, 14, 30]. This effort to avoid traffic transiting a specific country led us to investigate whether these efforts have been successful. Netherlands. We selected the Netherlands for three reasons: 1) it is beginning to emerge as a site where servers are located for cloud services, such as Akamai; 2) it hosts a large IXP (AMS-IX); and 3) it is drafting a mass surveillance law [39]. Analyzing the Netherlands will allow us to see what effect AMS-IX and the emergence of cloud service hosting have on its traffic. Kenya. Prior research on the interconnectivity of Africa [22,28] led us to explore the characterization of an African country's interconnectivity. We chose Kenya for a few reasons: 1) it is a location with many submarine cable landing points, 2) it has high Internet access and usage (for the East African region), and 3) it has more than one IXP [1, 52]. India. India has one of the highest numbers of Internet users in Asia, second only to China, which has already been well studied [53,56]. With such a high number of Internet users, and presumably a large amount of Internet traffic, we study India to see where this traffic is going. United States. We chose to study the United States because of how inexpensive it is to host domains there, the prevalence of Internet and technology companies located there, and because it is a known surveillance state. Surveillance States When analyzing which countries Internet traffic traverses, special attention should be given to countries that may be unfavorable because of their surveillance laws. Some of the countries that are currently conducting surveillance are the "Five Eyes" [21,36] (the United States, Canada, United Kingdom, New Zealand, and Australia), as well as France, Germany, Poland, Hungary, Russia, Ukraine, Belarus, Kyrgyzstan, and Kazakhstan. Five Eyes. The "Five Eyes" participants are the United States National Security Agency (NSA), the United Kingdom's Government Communications Headquarters (GCHQ), Canada's Communications Security Establishment Canada (CSEC), the Australian Signals Directorate (ASD), and New Zealand's Government Communications Security Bureau (GCSB) [21]. According to the original agreement, the agencies can: 1) collect traffic; 2) acquire communications documents and equipment; 3) conduct traffic analysis; 4) conduct cryptanalysis; 5) decrypt and translate; 6) acquire information about communications organizations, procedures, practices, and equipment. The agreement also implies that all five countries will share all intercepted material by default. The agencies work so closely that the facilities are often jointly staffed by members of the different agencies, and it was reported "that SIGINT customers in both capitals seldom know which country generated either the access or the product itself" [36]. A number of other countries are passing laws to facilitate mass surveillance. These laws have differing levels of intensity, which can be seen in Table 1; the countries with the least intense surveillance laws are listed at the top of the table, and those with the most intense laws are listed at the bottom.
These countries, along with the "Five Eyes" participants, should be flagged when characterizing transnational detours in the following section. Characterizing Transnational Detours In this section, we describe our measurement methods, the challenges in conducting them, and our findings concerning the transnational detours of default Internet paths. Figure 2 summarizes our measurement process, which the rest of this section describes in detail. We analyze traceroute measurements to discover which countries are on the path from a client in a particular country to a popular domain. Using traceroutes to measure transnational detours is new; prior work used BGP routing tables to infer country-level paths [34]. Because we conduct active measurements, which are limited by our resources, we make a tradeoff and study five countries, as opposed to all countries' Internet paths. We report on measurements that we conducted on January 31, 2016. Resource Limitations The iPlane [37] and Center for Applied Internet Data Analysis (CAIDA) [16] projects maintain two large repositories of traceroute data, neither of which turns out to be suitable for our study. iPlane measurements use PlanetLab [44] nodes and have historical data as far back as 2006. Unfortunately, because iPlane uses PlanetLab nodes, which mostly use the Global Research and Education Network (GREN), the traceroutes from PlanetLab nodes will not be representative of typical Internet users' traffic paths [5]. CAIDA runs traceroutes from different vantage points around the world to randomized destination IP addresses that cover all /24s; in contrast, we focus on paths to popular websites from a particular country.

Figure 1: Measurement pipeline to study Internet paths from countries to popular domains.

In contrast to these existing studies, we run active measurements that represent the paths of a typical Internet user. To do so, we run DNS and traceroute measurements from RIPE Atlas probes, which are hosted all around the world and in many different settings, including home networks [46]. RIPE Atlas probes can use the local DNS resolver, which gives us the best estimate of the traceroute destination. Yet, conducting measurements from a RIPE Atlas probe costs a certain amount of "credits", which restricts the number of measurements that we could run. RIPE Atlas also imposes rate limits on the number of concurrent measurements and the number of credits that an individual user can spend per day. We address these challenges in two ways: (1) we reduce the number of necessary measurements we must run on RIPE Atlas probes by conducting traceroute measurements to a single IP address in each /24 (as opposed to all IP addresses returned by DNS), because all IP addresses in a /24 belong to the same AS and should therefore be located in the same geographic area; (2) we use a different method, VPN connections, to obtain a vantage point within a foreign country, which is still representative of an Internet user in that country. Path Asymmetry The reverse path is just as important as (and often different from) the forward path. Previous work has shown that paths are not symmetric most of the time: the forward path from point A to point B does not match the reverse path from point B to point A [29]. Most work on path asymmetry has been done at the AS level, but not at the country level. Our measurements consider only the forward path (from client to domain or relay), not the reverse path from the domain or relay to the client.
We measured path asymmetry at the country granularity. If country-level paths are symmetric, then the results of our measurements would be representative of the forward and reverse paths. If the country-level paths are asymmetric, then our measurement results only provide a lower bound on the number of countries that could potentially conduct surveillance. Using 100 RIPE Atlas probes located around the world, and eight Amazon EC2 instances, we ran traceroute measurements from every probe to every EC2 instance and from every EC2 instance to every probe. After mapping the IPs to countries, we analyzed the paths for symmetry. First, we compared the set of countries on the forward path to the set of countries on the reverse path; this yielded about 30% symmetry. What we wanted to know is whether or not the reverse path has more countries on it than the forward path. Thus, we measured how many reverse paths were a subset of the respective forward path; this was the case for 55% of the paths. This level of asymmetry suggests that our results represent a lower bound on the number of countries that transit traffic; our results are a lower bound on how many unfavorable countries transit a client's path. It also suggests that while providing lower bounds on transnational detours is feasible, designing systems to completely prevent these detours on both forward and reverse paths may be particularly challenging, if not impossible. Traceroute Origination and Destination Selection Each country hosts a different number of RIPE Atlas probes, ranging from about 75 probes to many hundreds. Because of the resource restrictions, we could not use all probes in each of the countries. We selected the set of probes that had unique ASes in the country to get the widest representation of origination (starting) points. For destinations, we used the Alexa Top 100 domains in each of the respective countries, as well as the third-party domains that are requested as part of an original web request. To obtain these 3rd party domains we curl (i.e., HTTP fetch) each of the Top 100 domains, but we must do so from within the country we are studying. There is no current functionality to curl from RIPE Atlas probes, so we establish a VPN connection within each of these countries to curl each domain and extract the third-party domains; we curl from the client's location in case web sites are customizing content based on the region of the client. Country Mapping Accurate IP geolocation is challenging. We use MaxMind's geolocation service to map IP addresses to their respective countries [38], which is known to contain inaccuracies. Fortunately, our study does not require high-precision geolocation; we are more interested in providing accurate lower bounds on detours at a much coarser granularity. Fortunately, previous work has found that geolocation at a country-level granularity is more accurate than at finer granularity [31]. In light of these concerns, we post-processed our IP to country mapping by removing all IP addresses that resulted in a 'None' response when querying MaxMind, which causes our results to provide a conservative estimate of the number of countries that paths traverse. It is important to note that removing 'None' responses will always produce a conservative estimate, and therefore we are always underestimating the amount of potential surveillance. Figure 2 shows an example of this post-processing. 
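The post-processing just described (map each traceroute hop to a country, drop hops that time out or that MaxMind cannot geolocate, and collapse the result into a country-level path) is straightforward to express. A minimal sketch, using a hypothetical ip_to_country lookup table in place of an actual MaxMind database query:

```python
def to_country_path(hops, ip_to_country):
    """Collapse an IP-level traceroute into a country-level path.

    Hops that time out ('***') or that cannot be geolocated (None) are dropped,
    which makes the resulting country path a conservative lower bound.
    Consecutive hops in the same country are merged into one entry.
    """
    countries = []
    for hop in hops:
        if hop == "***":
            continue                      # unresponsive router: no location known
        country = ip_to_country.get(hop)  # stand-in for a MaxMind country lookup
        if country is None:
            continue                      # 'None' responses are removed
        if not countries or countries[-1] != country:
            countries.append(country)
    return countries

# Hypothetical hops and lookup, mirroring the paper's BR -> CO -> None example
lookup = {"10.0.0.1": "BR", "10.0.0.2": "BR", "10.0.0.3": "CO", "10.0.0.4": None}
print(to_country_path(["10.0.0.1", "10.0.0.2", "***", "10.0.0.3", "10.0.0.4"], lookup))
# ['BR', 'CO']
```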
Results Table 2 shows the five countries we studied along the top of the table and the countries that host their content in each row. For example, the United States is the endpoint of 77% of the paths that originate in Brazil. A "-" represents the case where no paths ended in that country. For example, no Brazilian paths terminated in South Africa. Table 3 shows the fraction of paths that transit certain countries, with a row for each country that is transited. First we analyze hosting diversity; this shows us how many unique countries host a domain. The more countries a domain is hosted in, the greater the chance that the content is replicated in a favorable country, potentially allowing a client to circumvent an unfavorable country. We queried DNS from 26 vantage points around the world, which are shown in Figure 3; we chose this set of locations because they are geographically diverse. Then we mapped the IP addresses in the DNS responses to their respective countries to determine how many unique countries a domain is hosted in. Figure 4 shows the fraction of domains that are hosted in different numbers of countries; we can see two common hosting cases: 1) CDNs, and 2) a single hosting country. This shows that many domains are hosted in a single unique country, which leads us to our next analysis: where are these domains hosted, and which countries are traversed on the way to reach these locations. Many of the countries that appear on Kenyan paths are explained by submarine cable routes [51]. Additionally, there is a cable from Mombasa, Kenya, to Fujairah, United Arab Emirates, which likely explains the large fraction of paths that include these countries. Figures 5a, 5b, and 5c show the fraction of paths that trombone to different countries for the Netherlands, Brazil, and Kenya. 24% of all paths originating in the Netherlands (62% of domestic paths) trombone to a foreign country before returning to the Netherlands. Despite Brazil's strong efforts in building IXPs to keep local traffic local, we can see that their paths still trombone to the United States. This is due to IXPs being seen as a threat by competing commercial providers; providers are sometimes concerned that "interconnection" will result in making business cheaper for competitors and in the poaching of customers [45]. It is likely that Brazilian providers see other Brazilian providers as competitors and therefore as a threat at IXPs, which causes them to peer with international providers instead of other local providers. Additionally, we see Brazilian paths trombone to Spain and Italy. We have observed that MaxMind sometimes mislabels IP addresses as being in Spain when they are actually located in Portugal. This mislabeling does not affect our analysis of detours through surveillance states, as we do not highlight either Spain or Portugal as a surveillance state. We see Italy often in tromboning paths because Telecom Italia Sparkle is one of the top global Internet providers [4]. Tromboning Kenyan paths most commonly traverse Mauritius, which is expected considering the submarine cables between Kenya and Mauritius. Submarine cables also explain South Africa, Tanzania, and the United Arab Emirates on tromboning paths. Finding 3.6 (United States as an Outlier): The United States hosts 97% of the content that is accessed from within the country, and only five foreign countries (France, Germany, Ireland, Great Britain, and the Netherlands) host content for the other 3% of paths.
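The hosting-diversity analysis described above reduces to counting, per domain, the number of distinct countries seen across the DNS answers from all vantage points. A minimal sketch; the domain names and resolver results below are hypothetical, standing in for the 26-vantage-point measurements:

```python
from collections import defaultdict

# Hypothetical (domain, country) pairs derived from DNS A records returned at
# different vantage points, after IP-to-country mapping.
dns_answers = [
    ("example-news.br", "US"), ("example-news.br", "US"), ("example-news.br", "BR"),
    ("example-cdn.com", "US"), ("example-cdn.com", "DE"), ("example-cdn.com", "JP"),
    ("example-bank.ke", "US"),
]

hosting_countries = defaultdict(set)
for domain, country in dns_answers:
    hosting_countries[domain].add(country)

for domain, countries in hosting_countries.items():
    note = " (single hosting country)" if len(countries) == 1 else ""
    print(f"{domain}: hosted in {len(countries)} countries {sorted(countries)}{note}")
```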
Many of the results find that Brazilian, Netherlands, Indian, and Kenyan paths often transit surveillance states, most notably the United States. The results from studying paths that originate in the United States are drastically different from those of the other four countries. The other four countries host very small amounts of content accessed from their own country, whereas the United States hosts 97% of the content that is accessed from within the country. Only 13 unique countries are ever on a path from the United States to a domain in the top 100 (or third party domain), whereas 30, 30, 25, and 38 unique countries are seen on the paths originating in Brazil, Netherlands, India, and Kenya, respectively. Limitations This section discusses the various limitations of our measurement methods and how they may affect the results that we have reported. Traceroute accuracy and completeness. Our study is limited by the accuracy and completeness of traceroute. Anomalies can occur in traceroute-based measurements [2], but most traceroute anomalies do not cause an overestimation in surveillance states. The incompleteness of traceroutes, where a router does not respond, causes our results to underestimate the number of surveillance states, and therefore also provides a lower bound on surveillance. IP Geolocation vs. country mapping. Previous work has shown that there are fundamental challenges in deducing a geographic location from an IP address, despite using different methods such as DNS names of the target, network delay measurements, and host-to-location mapping in conjunction with BGP prefix information [43]. While it has been shown that there are inaccuracies and incompleteness in MaxMind's data [31], the focus of this work is on measuring and avoiding surveillance. We use Maxmind to map IP to country (as described in Section 3.1.4), which provides a lower bound on the amount of surveillance, as we have described. IPv4 vs. IPv6 connectivity. The measurements we conducted only collect and analyze IPv4 paths, and therefore all IPv6 paths are left out of our study. IPv6 paths likely differ from IPv4 paths as not all routers that support IPv4 also support IPv6. Future work includes studying IPv6 paths and which countries they transit, as well as a comparison of country avoidability between IPv4 and IPv6 paths. Preventing Transnational Detours In light of our analysis of the state of default Internet paths from Section 3, we now explore the extent to which various techniques and systems can help clients in various countries prevent unwanted transnational routing detours. We explore two different mechanisms for increasing path diversity: discovering additional website replicas by diverting DNS queries through global open DNS resolvers and creating additional network-layer paths with the use of overlay nodes. We discuss our measurement methods, develop an avoidance metric and algorithm, and present our results. Measurement Approach Country Avoidance with Open Resolvers. If content is replicated on servers in different parts of the world, open DNS resolvers located around the world may also help clients discover a more diverse set of replicas. Figure 6 illustrates our measurement approach for this study, which differs slightly from that described in Section 3.1: instead of using RIPE Atlas probes to query local DNS resolvers, we query open DNS resolvers located around the world [33]. 
These open DNS resolvers may provide different IP addresses in the DNS responses, which represent different locations of content replicas. The measurement study in Section 3.1 used RIPE Atlas probes to traceroute to the IP addresses in the DNS responses; in contrast, for this portion of the study we initiate a VPN connection to the client's country and traceroute (through the VPN connection) to the IP addresses in the DNS responses returned by the open resolvers. Country Avoidance with Relays. Using an overlay network may help clients route around unfavorable countries or access content that is hosted in a different country. Figure 7 shows the steps to conduct this measurement. After selecting relay machines, we run traceroute measurements from Country X to each relay and from each relay to the set of domains. We then analyze these traceroutes using the pipeline in Figure 2 to determine country-level paths. We use eight Amazon EC2 instances, one in each geographic region (United States, Ireland, Germany, Singapore, South Korea, Japan, Australia, Brazil), as well as 4 Virtual Private Server (VPS) machines (France, Spain, Brazil, Singapore), which are virtual machines that are functionally equivalent to dedicated physical servers.

Figure 7: Measurement approach for country avoidance with overlay network relays.

These two sets of machines allow us to evaluate surveillance avoidance with a geographically diverse set of relays. By selecting an open resolver in each country that also has a relay in it, we can keep the variation in measurement methods low, leading to a more accurate comparison of country avoidance methods. Avoidability Metrics We introduce a new metric and algorithm to measure how often a client in Country X can avoid a specific Country Y. Using the proposed metric and algorithm, we can compare how well the different methods achieve country avoidance for any (X, Y) pair. Avoidability metric. We introduce an avoidability metric to quantify how often traffic can avoid Country Y when it originates in Country X. Avoidability is the fraction of paths that start in Country X and do not transit Country Y. We calculate this value by dividing the number of paths from Country X to domains that do not traverse Country Y by the total number of paths from Country X. The resulting value will be in the range [0,1], where 0 means the country is unavoidable for all of the domains in our study, and 1 means the client can avoid Country Y for all domains in our study. For example, suppose there are three paths originating in Brazil: (1) BR → US, (2) BR → CO → None, (3) BR → *** → BR. After processing the paths as described in Section 3.1.4, the resulting paths are: (1) BR → US, (2) BR → CO, (3) BR → BR. The avoidance value for avoiding the United States would be 2/3 because two out of the three paths do not traverse the United States. This metric represents a lower bound, because it is possible that the third path timed out (***) because it traversed the United States, which would make the third path BR → US → BR. The pseudocode for the relay-based computation is given in Algorithm 1, where paths1 holds the country-level paths from the client to each relay and paths2 holds the paths from each relay to each domain:

Algorithm 1: Avoidability of country c from a client in Country X using relays
    set usableRelays
    for each (relay, path) in paths1 do
        if c not in path then
            usableRelays ← relay
    set accessibleDomains
    for each (relay, domain, path) in paths2 do
        if relay in usableRelays then
            if c not in path then
                accessibleDomains ← domain
    D ← number of all unique domains in paths2
    A ← length of accessibleDomains
    return A/D, the fraction of domains accessible from the client in Country X without traversing Country Y

Avoidability algorithm with relays.
Measuring the avoidability of a Country Y from a client in Country X using relays has two components: (1) Is Country Y on the path from the client in Country X to the relay? (2) Is Country Y on the path from the relay to the domain? For every domain, our algorithm checks whether there exists at least one path from the client in Country X, through any relay, and on to the domain that does not transit Country Y. The algorithm (Algorithm 1) produces a value in the range [0,1] that can be compared to the output of the avoidability metric described above. Upper bound on avoidability. Although the avoidability metric and algorithm provide a method to quantify how avoidable Country Y is from a client in Country X, it may be the case that a number of domains are only hosted in Country Y, so the avoidance value for these countries would never reach 1.0. For this reason, we measured the upper bound on avoidance for a given (Country X, Country Y) pair, which represents the best-case value for avoidance. Algorithm 2 shows the pseudocode for computing this metric. The algorithm analyzes the destinations of all domains from all relays, and if there exists at least one destination for a domain that is not in Country Y, then this increases the upper bound value. An upper bound value of 1.0 means that every domain studied is hosted (or has a replica) outside of Country Y. This value puts the avoidance values in perspective for each (Country X, Country Y) pair. Results We compared avoidance values when using open resolvers, when using relays, and when using no country avoidance tool. First, we discuss how effective open resolvers are at country avoidance. We then examine the effectiveness of relays for country avoidance, as well as for keeping local traffic local. Table 4 shows avoidance values; the top row shows the countries we studied and the left column shows the country that the client aims to avoid. Avoidance with Open Resolvers A given country is more avoidable (higher avoidance value) when open resolvers are used as a tool for country avoidance. Avoidance with Relays As seen in Table 4, there are two significant trends: 1) the ability of a client to avoid a given Country Y increases with the use of relays, and 2) the least avoidable countries are surveillance states. For many of the (Country X, Country Y) pairs in Table 4, the avoidance with relays reaches the upper bound on avoidance. In almost every (Country X, Country Y) pair, where Country X is the client's country (Brazil, Netherlands, India, Kenya, or the United States) and Country Y is the country to avoid, the use of an overlay network makes Country Y more avoidable than the default routes. The one exception we encountered is when a client is located in Kenya and wants to avoid South Africa, where, as mentioned, all paths through our relays exit Kenya via South Africa. Relays are most effective for clients in the United States. On the other hand, it is much rarer for (Kenya, Country Y) pairs to achieve the upper bound on avoidance, showing that it is more difficult for Kenyan clients to avoid a given country. This is not to say that relays are not effective for clients in Kenya; for example, the default routes to the top 100 domains for Kenyans avoid Great Britain 50% of the time, but with relays this percentage increases to about 97% of the time, and the upper bound is about 98%.
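The avoidability metric, the relay-based check (Algorithm 1), and the upper bound can all be expressed in a few lines over country-level paths. A minimal sketch under the assumption that paths have already been reduced to country lists as in Section 3.1.4; the relay and domain names are hypothetical:

```python
def avoidability(paths, avoid):
    """Basic metric: fraction of country-level paths that do not transit `avoid`."""
    return sum(avoid not in p for p in paths) / len(paths) if paths else 0.0

def relay_avoidability(client_to_relay, relay_to_domain, avoid):
    """Algorithm 1: fraction of domains reachable via some relay while avoiding `avoid`.

    client_to_relay: {relay: country path from the client to that relay}
    relay_to_domain: {(relay, domain): country path from that relay to that domain}
    """
    usable_relays = {r for r, p in client_to_relay.items() if avoid not in p}
    accessible = {d for (r, d), p in relay_to_domain.items()
                  if r in usable_relays and avoid not in p}
    all_domains = {d for (_, d) in relay_to_domain}
    return len(accessible) / len(all_domains) if all_domains else 0.0

def upper_bound(relay_to_domain, avoid):
    """Best case: a domain counts if at least one observed destination lies outside `avoid`."""
    all_domains = {d for (_, d) in relay_to_domain}
    ok = {d for (_, d), p in relay_to_domain.items() if p[-1] != avoid}
    return len(ok) / len(all_domains) if all_domains else 0.0

# The worked example from the text: 2 of 3 Brazilian paths avoid the US.
print(avoidability([["BR", "US"], ["BR", "CO"], ["BR", "BR"]], "US"))  # 0.666...

# Toy relay data (hypothetical relays and domains).
c2r = {"relay-ie": ["BR", "US", "IE"], "relay-de": ["BR", "ES", "DE"]}
r2d = {("relay-ie", "site-a"): ["IE", "GB", "US"],
       ("relay-de", "site-a"): ["DE", "NL"],
       ("relay-de", "site-b"): ["DE", "US"]}
print(relay_avoidability(c2r, r2d, "US"))  # 0.5: site-a is reachable US-free via relay-de
print(upper_bound(r2d, "US"))              # 0.5: only site-a has a non-US endpoint
```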
Despite increasing clients' ability to avoid the United States, relays are less effective at helping clients avoid this country than at avoiding any other Country Y. Clients in India can avoid the United States more often than clients in Brazil, the Netherlands, and Kenya, avoiding it on 65% of paths. Kenyan clients can only avoid the United States 40% of the time even while using relays. Additionally, the upper bound for avoiding the United States is significantly lower in comparison to any other country. For the cases where there were relays located in one of the five studied countries, we evaluated how effectively the use of relays kept local traffic local. This evaluation was possible for Brazil and the United States. Tromboning Brazilian paths decreased from 13.2% without relays to 9.7% with relays; when relays are used, all tromboning paths detour only through the United States. With the use of relays, only 1.3% of paths tromboned for a United States client, whereas without relays 11.2% of paths tromboned. The United States paths that still trombone with relays detour only through Ireland. Discussion Avoiding multiple countries. We have studied only the extent to which Internet paths can be engineered to avoid a single country. Yet, avoiding a single country may force an Internet path into other unfavorable jurisdictions. This possibility suggests that we should also be exploring the feasibility of avoiding multiple surveillance states (e.g., the "Five Eyes") or perhaps even entire regions. It is already clear that avoiding certain combinations of countries is not possible, at least given the current set of relays; for example, to avoid the US, Kenyan clients rely on the relay located in Ireland, so avoiding both countries is often impossible. The evolution of routing detours and avoidance over time. Our study is based on a snapshot of Internet paths. Over time, paths change, hosting locations change, IXPs are built, submarine cables are laid, and surveillance states change. Future work can and should involve exploring how these paths evolve over time, and analyzing the relative effectiveness of different strategies for controlling traffic flows. Isolating DNS diversity vs. path diversity. In our experiments, the overlay network relays perform DNS lookups from geographically diverse locations, which provides some level of DNS diversity in addition to the path diversity that the relays inherently provide. This approach somewhat conflates the benefits of DNS diversity with the benefits of path diversity and in practice may increase clients' vulnerability to surveillance, since each relay is performing DNS lookups on each client's behalf. We plan to conduct additional experiments where the client relies on its local DNS resolver to map domains to IP addresses, as opposed to relying on the relays for both DNS resolution and routing diversity. Related Work Nation-state routing analysis. Recently, Shah and Papadopoulos measured international BGP detours (paths that originate in one country, cross international borders, and then return to the original country) [50]. Using BGP routing tables, they found 2 million detours in each month of their study (out of 7 billion total paths), and they then characterized the detours based on detour dynamics and persistence. Our work differs by actively measuring traceroutes (actual paths), as opposed to analyzing BGP routes.
This difference is fundamental as BGP provides the AS path announced in BGP update messages, which is not necessarily the same as the actual path of data packets. Obar and Clement analyzed traceroutes that started and ended in Canada, but "boomeranged" through the United States ("boomerang" is another term for tromboning), and argued that this is a violation of Canadian network sovereignty [41]. Most closely related to our work, Karlin et al. developed a framework for country-level routing analysis to study how much influence each country has over interdomain routing [34]. This work measures the centrality of a country to routing and uses AS-path inference to measure and quantify country centrality, whereas our work uses active measurements and measures avoidability of a given country. Mapping national Internet topologies. In 2011, Roberts et al. described a method for mapping national networks of ASes, identifying ASes that act as points of control in the national network, and measuring the complexity of the national network [47]. There have also been a number of studies that measured and classified the network within a country. Wahlisch et al. measured and classified the ASes on the German Internet [54,55], Zhou et al. measured the complete Chinese Internet topology at the AS level [58], and Bischof et al. characterized the current state of Cuba's connectivity with the rest of the world [6]. Interconnectivity has also been studied at the continent level; Gupta et al. first looked at ISP interconnectivity within Africa [28], and it was studied later by Fanou et al. [22]. Circumvention Systems. There has been research into circumvention systems, particularly for censorship circumvention, that is related to this work but not sufficient for surveillance circumvention. Tor is an anonymity system that uses three relays and layered encryption to allow users to communicate anonymously [19]. VPNGate is a public VPN relay system aimed at circumventing national firewalls [40]. Unfortunately, VPNGate does not allow a client to choose any available VPN, which makes surveillance avoidance harder. Conclusion We have measured Internet paths to characterize routing detours that take traffic through countries that perform surveillance. Our findings show that paths commonly traverse known surveillance states, even when they originate and end in a non-surveillance state. As a possible step towards a remedy, we have investigated how clients can use the open DNS resolver infrastructure and overlay network relays to prevent routing detours through unfavorable jurisdictions. These methods give clients the power to avoid certain countries, as well as help keep local traffic local. Although some countries are completely avoidable, we find that some of the more prominent surveillance states are the least avoidable. Our study presents several opportunities for follow-up studies and future work. First, Internet paths continually evolve; we will repeat this analysis over time and publish the results and data on a public website, to help deepen our collective understanding about how the evolution of Internet connectivity affects transnational routes. Second, our analysis should be extended to study the extent to which citizens in one country can avoid groups of countries or even entire regions.
Finally, although our results provide strong evidence for the existence of various transnational data flows, factors such as uncertain IP geolocation make it difficult to provide clients guarantees about country-level avoidance; developing techniques and systems that offer clients stronger guarantees is a ripe opportunity for future work.
2016-05-24T23:23:21.000Z
2016-05-24T00:00:00.000
{ "year": 2016, "sha1": "4d5e994a7d31a135b69b06e70a429361471e9ab5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "e86e4bca4afa8b9f04a80a2533a84a54684ed523", "s2fieldsofstudy": [ "Computer Science", "Political Science" ], "extfieldsofstudy": [ "Computer Science" ] }
90471843
pes2o/s2orc
v3-fos-license
Identifying factors that influence the success of forestry research projects implemented in developing countries: case study results from Vietnam This paper reports a qualitative investigation of factors contributing to success in 10 collaborative international forestry research projects funded by the Australian Centre for International Agricultural Research (ACIAR) in Vietnam. Success factors were identified, and the relative success of projects was evaluated in terms of research achievements and impacts, through analysis of ACIAR's project records and interviews with key project participants. This process identified 22 factors considered to either enhance or diminish project success, with the most frequently identified being: collaborative scoping and design; skills mix and time allocations; funding and equipment; scientists' commitment and collaboration; and capacity building. Three projects, representing different categories of assessed research achievement and impact, were examined for evidence of relationships between these success factors and the relative success of the projects. This assessment suggested that most of the identified success factors were evident in the project with high research achievements and high impacts; and, conversely, that there was evidence of factors that diminish project success in a project that had low achievements and low impacts. The results reported here can help improve the design and implementation of future collaborative forestry research projects. Introduction International collaborative research in agricultural and natural resource management is often funded through Official Development Assistance (ODA) programs, and evaluations have shown such investments can generate significant benefits to farmers and rural communities (Raitzer, 2003; Lindner et al., 2013). The conduct of international agricultural research is a complex activity, producing a wide variety of outputs, which are influenced by factors such as the capacity of the collaborating partners and the stage of activities in the research-for-development continuum (Bantilan et al., 2004). In addition, the pathways from research to impact in agriculture, forestry, fisheries and natural resources research are complex and non-linear (Millstone et al., 2010; Mayne and Stern, 2013; Joly et al., 2015), and definitions of 'success' can be contested and controversial (McLeod et al., 2012). ODA interventions interact with other factors and rarely lead to development outcomes on their own; consequently, there are various challenges in establishing relationships between an intervention and its impact (Stern et al., 2012). Similarly, even well-designed evaluations of research investments may not lead to organizational learning for research project leaders, team members or funders (Forss et al., 1994; Horton and Mackay, 2003). For example, findings from economic impact assessments may not identify why changes occurred, or how to improve future research programs (Horton and Mackay, 2003). In this context, this article seeks to identify factors that affect the success of international collaborative forestry research projects, and explore whether there is an apparent relationship between these factors and the evaluated level of success of a project. We investigated these questions through a comparative qualitative analysis of 10 collaborative forestry research projects between Australia and Vietnam.
We evaluated the relative success of each project from project records, using a previously developed methodology (Bartlett, 2016a), and surveyed the views of key project participants, and then sought evidence of how the factors were manifested in projects with different levels of success. We distil lessons that are able to be influenced, enhanced or facilitated by those who design and fund ODA-research projects and those with responsibility for implementing these projects. Our approach was informed by that of McLeod et al. (2012), who advocated a qualitative approach focused on 'understanding how the various project stakeholders subjectively perceived project outcomes and the evaluation criteria they drew on in doing so'. There is limited published literature that documents the generic factors that affect the success of ODA-funded forestry research projects. As Blamey and Mackenzie (2007) have noted, context can be the key to uncovering the circumstances in which, and the reasons why, a particular intervention works. Because each project inevitably faces its own unique set of opportunities and constraints, it is often difficult to define which factors are unique and context dependent, and which are more widely applicable. There are many external factors that can play a role in determining the ultimate impact (or lack of impact) for any given project. Some examples from the literature include the availability of the technologies, such as improved germplasm (Franzel et al., 2004); dissemination of knowledge in a form appropriate to the users (Thangata and Alavalapati, 2003); their capacity to take risks (Mercer, 2004); market incentives (Pattanayak et al., 2003); security of land tenure (Suyanto et al., 2005) and their access to ancillary resources such as skills and finance (Farrington et al., 1997). Forestry research typically involves complex systems involving biophysical and social elements and which, compared with agricultural systems, require much longer time frames to produce the desired products (Henderson, 2000). For forestry research projects undertaken in developing countries, achieving positive impacts is likely to depend on multiple factors, which can be interdependent (Byron, 2001). The Australian Centre for International Agricultural Research The Australian Centre for International Agricultural Research (ACIAR) is a federally funded agency that commissions collaborative agriculture, fisheries and forestry research projects in developing countries. ACIAR projects seek to generate knowledge, technologies and capacity to achieve better decisionmaking, changed agricultural practices and policies that, in turn, generate positive scientific, economic, social or environmental impacts (ACIAR, 2014). In ACIAR terminology, projects generate outputs which, if adopted, lead to outcomes and impacts. Outputs are defined as the products of the research, including technologies, knowledge, capacity and policy inputs, that can be adopted or used by the 'next users' as inputs for further research; outcomes are changes in practice, products or policies consequent on the adoption of outputs and impacts are changes in markets, the state of common resources and to individuals or communities that can be attributed to the adoption of the research outputs by the 'end users' of the research (Davis et al., 2008). 
In accordance with its governing legislation (Commonwealth of Australia, 1982), ACIAR funds research projects conducted by Australian or international scientists with scientists in partner countries, with capacity building of research partners supported in parallel with research activities. Over a 30-year period, ACIAR has invested over AUD 100 million to fund 150 forestry projects and activities in 29 countries; most projects have been implemented in Indonesia, Vietnam and Papua New Guinea (Bartlett, 2016b). ACIAR has a commitment to evaluating the effectiveness and benefits of its projects (ACIAR, 2014), with all large projects having externally conducted end-of-project reviews, some projects having adoption studies conducted by former project leaders and~10 per cent of projects subject to externally conducted impact assessments. However, it does not have a standard approach for comparing project achievements or for identifying the factors that contribute to the relative success of projects (Bartlett, 2016a). Defining project success In this paper, success is defined following the interpretation used in other ACIAR studies as having two primary dimensions: the first is the extent to which planned research outputs are achieved and adopted by 'next users', such as the participating scientists, farmers, processors and policy makers, termed achievements; the second is the extent of the impacts resulting from wider adoption of the research outputs by 'end users', typically stakeholders outside the project and often beyond its life, termed impacts (Pearce, 2010). In both dimensions, this study focuses on those factors that could be influenced by those responsible for research design, implementation and support, rather than external factors that are beyond the reach of the project leaders or managers to influence. Carden (2004) presents a complementary approach that focuses on factors beyond the reach of a research project, such as its influence on policy formulation. Factors believed to influence a research project's success There are few studies that report project-level factors contributing to success of agricultural research projects. An ACIAR impact assessment study (Pearce, 2010) surveyed 30 people, who were Australian project leaders or ACIAR-employed research program managers and country managers and identified 14 factors that contributed to successful project outcomes, with the following six factors most often identified by respondents: • Clearly defined objectives and research questions based on a clear stakeholder needs and with a project plan that assigns clear responsibilities to participants. • Strong communication leading to good collaboration, including formal and informal communication arrangements and compatible language skills. • Trust, complementarity and alignment of interests, including effective interpersonal relationships and mutual empathy and respect. • Good project leadership and management support, including the capacity to empower the research team, co-ordinate diverse groups and engender institutional support. • Strong and capable research team, including having the right technical abilities and the time commitment to undertake the required research; and • Institutional support both for the Australian and in-country partner. 
Forestry This list provides a useful benchmark for this research, which seeks to confirm their applicability for forestry research projects from Vietnam and explore whether or not scientists from the partner country have the same view as Australians on the relevance of these factors. Forestry development and ACIAR's forestry research investments in Vietnam Vietnam is a country of almost 90 million people in South-East Asia. Over the 60 years up to 1995, forest extent declined tõ 9.8 million hectares or 29.6 per cent of Vietnam's land area (Government of Vietnam, 2007), but has since increased to 14.7 million hectares or 44.4 per cent of land area (FAO, 2015). Planted forests have played a very significant role in achieving this restoration of forest cover, with a total of 3.66 million hectares or 25 per cent of Vietnam's forest area being classified as planted (FAO, 2015). Since 1988, the Government of Vietnam has allocated forest land to communities on renewable 50 year leases and much of this has been planted with fast-growing short rotation species such as Eucalyptus and Acacia (Amat et al., 2010). An estimated 250 000 smallholder farmers are growing acacia plantations on rotations of 5-10 years , primarily for the production of pulpwood. Following the Doi Moi economic reform policies of the mid-1980s, the Government of Vietnam introduced a range of measures, including land tenure reforms and forestry policies, such as the 1998 Five Million Hectare Reforestation Program, to encourage smallholder farmers to plant commercial trees. The Vietnam Forestry Development Strategy 2006-2020 aspires to 16.24 million hectares of forest by 2020, including 4.15 million hectares of plantations, and recognizes the contribution that science and technology transfer has made to the quality and efficiency of its afforestation programs (Government of Vietnam, 2007). Both the achievements and concerns about aspects of Vietnam's reforestation program have been discussed in the literature. For example, increasingly substantial economic benefits for smallholders and regional economies are being generated from acacia plantations (Byron, 2014): but these gains followed an initial phase of poor growth associated with use of inferior germplasm or incorrect species-site matching (Nguyen and Gilmour, 1999); and future growth of this sector depends on avoiding environmental degradation (Amat et al., 2010), and improving and sustaining productivity from these plantings . Concerns have been expressed about loss of higher quality agricultural land (de Jong et al., 2006), disruption of existing land use systems (Clement and Amezaga, 2009), and loss of access for collection of nontimber forest products, and inequitable allocation to poor households (McElwee, 2009). Vietnam has a large and expanding timber processing industry, with the annual value of export timber products growing at a rate of 40 per cent between 2000 and 2010 (Phuc and Canby, 2011); by 2005, wood products had become the nation's fifth largest export commodity. Vietnam is now one of the world's largest exporters of secondary wood products, principally furniture, with wood products' export earnings reaching $3.4 billion in 2010 (Phuc and Canby, 2011). However, there may be impediments that prevent smallholders from fully capitalizing on the markets associated with domestic wood processing industries (Putzel et al., 2012). 
ACIAR's forestry research investments in Vietnam began in 1993 and, until 2011, all projects were undertaken only with the Forest Science Institute of Vietnam, the predecessor of the Vietnam Academy of Forest Sciences. From 1992 to December 2014, ACIAR completed 20 forestry research projects in Vietnam; the majority of these operated in multiple countries, with the activities in Vietnam being part of a larger research project. The projects cover 5 of the 10 research themes from the ACIAR forestry program (Bartlett, 2016b): Theme 1: Domestication and improvement of Australian trees. Theme 2: Silviculture for Australian trees. Theme 3: Domestication and silviculture of non-Australian trees. Theme 4: Forest health and biosecurity. Theme 5: Value added processing and treatment of wood. The domestication and improvement of Australian tree species, which could be grown on short rotations, contributed greatly to the expansion of the planted forests in Vietnam. Various species of Eucalyptus, Melaleuca and Acacia were first introduced to Vietnam in the 1950s and 1960s. ACIAR's projects on the domestication and management of Eucalyptus and Acacia have facilitated significant improvement in the productivity of these Australian trees in Vietnam (Fisher and Gordon, 2007), with 50-100 per cent gains in wood production demonstrated in trials (Harwood et al., 2015). By 2013, the estimated area of Acacia plantations was 1.1 million hectares and there was a further 200 000 hectares of Eucalyptus plantations . Methods The methodology for this case study involved a preparatory phase to identify suitable research projects for the study followed by three phases of research: identification of success factors; evaluation of relative success of projects and identification of relationships between the success factors and the relative success of different projects. This process is illustrated in Figure 1. Phase 0: Identification of projects for the case study In the preparatory phase, 10 of the 20 projects ACIAR implemented in Vietnam between 1994 and 2012 (Table 1) were selected for the case study, taking into account the following factors: • Focusing on medium to large research projects, rather than small research activities. • Ensuring representation of projects from each research theme. • Inclusion of projects across the 20-year period, including some projects that were part of a linked program over at least 10 years. • Inclusion of some projects conducted entirely in Vietnam and some that were regional projects, with smaller components conducted in Vietnam. • Having adequate project records available, including project document, annual final report and external end-of-project review report. Identifying factors that influence the success of forestry research projects implemented in developing countries Phase 1: Identification of project success factors We used qualitative data, derived from interviews with former research project participants, to identify the factors considered to be most influential in achieving or hindering project success. For each project, the Australian project leaders, Vietnamese project coordinators and other scientists who had been involved in each project were interviewed. A total of 24 scientists, comprising 11 from Australia and 13 from Vietnam, were identified from project records and interviewed individually by the primary author using a standard set of questions (available as Supplementary online material). 
Interviewees were asked to explain what they thought constituted success for an ACIAR project, and then to nominate five factors that can enhance project success, and five factors that can diminish project success. Other questions sought their views about aspects of the project's design, implementation and other contextual factors. The research protocol was approved by the Australian National University Human Ethics Committee (protocol no. 2014/051). HyperRESEARCH (Researchware, Inc.http://www.researchware.com/ accessed 13 June 2014) qualitative data analysis software was used to analyse interview data to establish perspectives on the definition of project success and to facilitate aggregation of thematic aspects of the responses into two lists of factors that contribute to either enhancing or diminishing project success. Individuals' responses to questions about each project's design and implementation were analysed as well as their responses on factors affecting project success. When respondents covered aspects of multiple factors in a single response, each aspect was identified, allocated to the most relevant factor and counted. When the respondents identified aspects related to the same factor in two or more responses, the aspect was counted only once, against the most relevant factor. The primary author compared the two lists to identify complementary expressions of the same factor, and prepared concisely worded statements of the factors that can enhance or diminish the success of research projects. The data were further analysed to identify the frequency of identification of each success factor, to give an indication of which success factors are considered most important, and whether there were any notable differences in the factors identified by Vietnamese or Australian respondents. Phase 2: Evaluation of relative success of the case study projects We used qualitative data drawn from internal ACIAR project records to evaluate the relative success (the evaluation questions and guidance on evidence sought are available as Supplementary online material) of each of the 10 projects. The records included project documents; annual reports; mid-term reviews; final reports; external end-of-project reviews; adoption studies and external impact assessments; project-related publications and written correspondence between ACIAR and project staff. These data provided perspectives from project participants, research program managers and external reviewers of projects. To evaluate relative success, the author used a score-card matrix methodology (Bartlett, 2016a) for each project, and assigned scores for four criteria related to research achievements: project design; results achieved; collaboration and publications; and four criteria related to research impacts: capacity building outcomes; scientific outcomes; economic outcomes; and social and policy outcomes. Under this methodology, scores totalling 10 were assigned for each of research achievements and research impacts, with both research achievements and scientific outcomes criteria assigned scores of up to 4 and all other criteria assigned scores of up to 2. The resulting scores for each of research achievements and research impacts were summed and then graphed. Scores of 0.0-5.0 were considered to be low achievements or low impacts; scores of 5.1-10.0 were considered to be high achievements or high impacts. 
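To illustrate the arithmetic of this score-card, a hypothetical sketch is given below; the criterion labels and example scores are invented for illustration and are not drawn from the study's data:

```python
# Illustrative sketch of the score-card logic: criterion scores are summed
# per dimension (each dimension totals at most 10) and a project is labelled
# low (0.0-5.0) or high (5.1-10.0) on achievements and on impacts.

def classify_project(achievement_scores, impact_scores):
    achievements = sum(achievement_scores.values())
    impacts = sum(impact_scores.values())
    def level(total): return "high" if total > 5.0 else "low"
    return f"{level(achievements)} achievements-{level(impacts)} impacts"

# Hypothetical project: strong research results but limited wider impact.
print(classify_project(
    {"design": 2, "results": 4, "collaboration": 2, "publications": 1},
    {"capacity": 2, "scientific": 1, "economic": 0.5, "social_policy": 0.5},
))  # -> high achievements-low impacts
```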
This approach facilitated the identification of projects that represent one of four project success categories based on the assessed levels of research achievements and impacts: high achievements-high impacts; high achievements-low impacts; low achievements-low impacts and low achievements-high impacts. Phase 3: Identification of relationships between success factors and the level of relative success achieved by different projects To explore possible relationships between the identified success factors and the evaluated relative success of a project, three projects, representing different project success categories, were selected for a more detailed analysis. The nature of the selected projects is shown in Table 2, with further information on the type of research conducted in each project and the way in which various success factors influenced its level of success provided in Appendix 1. As this task was exploratory in nature, two methods were used. Firstly, interview responses (IR) from the Australian and Vietnamese respondents who had held leadership positions in the selected projects were further analysed using HyperRESEARCH to identify any references to the way each of the success factors identified through the Phase 1 methods had enhanced or diminished success. Secondly, relevant project records (PR) for the three projects were reviewed by the primary author to identify any evidence about the way the various success factors may have influenced the project's success. Using these two sources of information, subjective ratings were assigned by the primary author for the apparent influence of each of these success factors on the project's success. The following five-category rating system was used: Strongly enhances: presence of factor appears to have strongly enhanced success. Enhances: presence of factor appears to have enhanced success. Neutral: no evidence that the factor enhanced or diminished success. Diminishes: absence of factor appears to have diminished success. Strongly diminishes: absence of factor appears to have strongly diminished success. Interpreting success and identifying success factors Views from project participants on what constitutes project success varied considerably, with some finding it difficult to articulate what success meant to them. The HyperRESEARCH analysis enabled the sentiments from the participants' responses to be combined into a definition of success. A successful ACIAR forestry research project can be considered to be one which, in the context of the time and resources available, involves good scientific methods, achieves what it set out to do, enhances capacity, facilitates ongoing scientific relationships and generates knowledge or technologies that can improve the system under investigation and result in benefits for the next or end users. The HyperRESEARCH analysis of participants' responses on the factors that can enhance or diminish project success identified 20 factors that they considered to enhance project success and 19 factors they considered to diminish project success (Table 3). When considered as a whole, there were 22 different factors identified that influence project success (Table 3), with most responses on factors which diminish success being the converse of those nominated for enhancing success.
However, among the responses, there were three factors identified that diminish success, and two factors that enhance success, for which there was no converse factor nominated. The interview data comprised 299 participant responses related to individual success factors. The frequency of identification of the 22 success factors by the 11 Australian and 13 Vietnamese respondents, for responses related to both enhancing and diminishing project success, is shown in Figure 2. The two most frequently identified factors, which together represented 20 per cent of the responses, were collaborative scoping and design; and skills mix and time allocations. Twelve of the success factors (Nos. 1-12 from Table 3) together represented 80 per cent of the responses, and so were considered as the most important factors affecting project success in this study. Most of the success factors were identified consistently by Australian and Vietnamese respondents, but there were some differences apparent. Vietnamese respondents more frequently identified success factors such as skills mix and time allocation; mutual benefit of research topic; strong, culturally appropriate relationships; leadership and management; and duration of project. Australian respondents more frequently identified success factors such as: time spent on in-country collaboration; effective communications and research networks; implementation flexibility, monitoring and review; continuity of partner institutions and team; and donor influence on design. Evaluation of the relative success of the forestry projects The results of this analysis (shown in Figure 3) demonstrate that the apparent success of a project can be quite different depending on whether the evaluation focuses on its achievements, its impacts or both its achievements and its impacts. In the evaluation based on research achievements, eight projects (80 per cent) received scores of six or more, whereas in the evaluation based on research impacts, seven projects (70 per cent) received scores of only four or less. If success requires both high achievements and high impacts, then only three projects (30 per cent) could be considered successful. Considering the evaluation scores for both the research achievements and the research impacts, it is apparent that the case study projects represent three categories of project success (see Figure 4): projects with low achievements and low impacts; projects with high achievements but low impacts and projects with high achievements and high impacts. In this case study, there were no examples of projects that had the unlikely combination of low achievements yet high impacts. Identifying factors that influence the success of forestry research projects implemented in developing countries Evidence of success factors in selected projects The primary author's assessment for the apparent influence of each success factor on project success, derived from the IR and evidence from PR, is shown in Table 4. This analysis showed that for the project that had high achievements and high impacts on the evaluation scores, there was good evidence that the presence of most of the success factors strongly enhanced the project's success. Conversely, the evidence from the analysis showed that, for the project that had low achievements and low impacts, nearly half of the success factors were absent. 
The project that had high achievements but low impacts showed the presence of some success factors and the absence of others, particularly the absence of links to the impact pathway. These relationships were more evident in information from interview records than in project records. This may be because the interview questions were designed to identify this type of information, whereas project records are variable in content and may not contain information specific to the success factors. The analysis also showed that there is a reasonably clear relationship pattern between those success factors which can be influenced during project design (Nos. 1, 2, 3, 6, 7, 16, 17, 20 and 21) and the evaluated level of research achievement and research impact. The high achievement-high impact project showed evidence of almost all of these factors strongly enhancing or enhancing the project's success. This demonstrates the importance of careful consideration of these success factors during the design of a forestry research project. Patterns of relationship were less clear for the 10 success factors, which can be influenced during project implementation. There was evidence that the presence of most of these factors had enhanced the level of success, which suggests that regardless of the quality of the project design, a project team that is well led and focused is more likely achieve the planned project outputs. Similarly, the absence of the success factor related to links to the impact pathway and user benefits appears to have strongly diminished the success of both the high achievementlow impact and the low achievement-low impact projects. Discussion Factors that influence a research project's success Many forestry production systems involve a complex diversity of components, have relatively long production cycles compared with most agricultural crops and involve products that require an efficient value chain and well-developed markets to realize their economic value. This means that forestry research generally requires long-term commitments and multi-faceted programs to generate substantial impacts (Henderson, 2000). Various authors have examined the factors that influence the success of forestry development initiatives which research projects seek to support. For example, preconditions for success of smallholder plantation forestry have been identified as secure land tenure, viable production technologies, the ability to protect trees to maturity and demand and access to profitable markets (Byron, 2001). Factors that influence the success of community forestry programs have been shown to include addressing social, economic and gender inequalities, secure property rights, intra-community governance, government support for community forestry and material benefits to community members (Baynes et al., 2015). While the impact of forestry research projects may be influenced by these factors, there are also other factors that can affect the success of a research project. Almost all of the success factors identified in this study have relevance for project design and/or project implementation, with only three factors (Nos. 15,18 and 19) being beyond the control of those who design and implement research projects and one other factor (No. 13) being only partially under their control. The approach used in this study indicates that project participants can identify a wide range of factors that influence success. 
It also found that it is possible to demonstrate that there is some relationship between the expression of these success factors in a project and its evaluated level of success. However, the findings on success factors should not be regarded as a blueprint for successful projects. Rather, they should be considered carefully during project design and implementation and the relevant factors applied where appropriate. Many of the 22 success factors identified by this study, which can enhance or diminish success of forestry research projects implemented in developing countries, are broadly consistent with those identified in previous studies of research projects (Miles, 1998; Pearce, 2010) and of development projects (Miles, 1998). However, some are additional to those reported previously, and others highlight the importance of particular aspects of previously identified factors. The additional success factors were • provision of adequate funding and facilities to conduct the planned research: this was the third most frequently identified factor and includes having mechanisms to ensure funds flow to researchers in a timely manner. • team and technical capacity building: this was the fifth most frequently identified factor and considered a particularly important contributor to greater success. It was previously identified only in the study of construction projects (Miles, 1998). It includes on-the-job training and mentoring, postgraduate study, study tours and work placements with the Australian partner. • site selection and scientific rigour of trials: for those projects for which this factor is relevant, these included elements such as long-term tenure security, appropriateness for species being planted, support of the local community and research being designed and implemented in a way that will produce scientifically valid results. • implementation flexibility with processes for monitoring and reviewing activities: this was more frequently identified by Australian respondents, reflecting the importance of having flexibility within the design, systems for monitoring project activities and donor support to review and adapt project activities including through a mid-term review. • donor influence on project design: this was considered a positive contributing factor when the donor influenced the quality of the science, but a negative factor when donor-driven aspects were imposed or unilateral decisions were made. • existence of long-term research collaborations: this was identified as a factor contributing to greater success, and reflects the contrasting situations of projects that follow a previous project with those that are one-off. • continuation of the research post project: this was identified by some Australian and Vietnamese respondents, and reflects their view that the willingness of the receiving institution and the scientists to use the new research skills and knowledge to continue related research after the project ends is important in judging a project's success; and • project leader's experience in the partner country: this was identified only as a contributor to lesser success and reflects the importance of the project leader having a good understanding of the culture and operating environment in the partner country.
Of the factors that had been previously identified, and for which this research identified particular aspects, the most significant were • collaborative scoping and design: including a strong emphasis on the importance of genuine collaboration between the partners in formulating the project design, and the potentially negative impact when Australian scientists insist on aspects of the design, as well as reiterating the importance of properly understanding the topic and situation and then having clear objectives and activities that are not overly ambitious. • skills mix and time allocations: this included recognition of the importance of having the right skills in the team to conduct the research as well as having adequate time allocations for each scientist working on the project. • institutional support: selecting partner institutions that are genuinely interested and willing to provide institutional support during project implementation. • good leadership and management: this was considered relevant to both the international and partner sides of the collaboration and includes ensuring partner scientists understand what tasks need to be undertaken and by when. • time in country: funding sufficient travel to enable adequate time to be spent in country working with the partner scientists. • effective communications and research networks: while the importance of having good communication within the team has previously been identified, the respondents also emphasized the value of researchers developing and using research networks beyond the team. • trust and interpersonal relationships: fostering an environment where partner scientists respect and trust each other, with international scientists displaying cultural sensitivity. • project duration: having sufficient time to achieve the planned research outputs; and • links to impact pathway and user benefits: previously the importance of having explicit adoption mechanisms had been identified, but this research highlighted the broader issue of embedding the research within the context of the impact pathway and ensuring that the research outputs are relevant to the needs of the end users. Two factors that had previously been identified (Pearce, 2010) as factors that contributed to the success of ACIAR research projects were not identified by the participants in this research. They were having in-country collaborators with good linkages to other relevant agencies; and the involvement of industry and commercial partners. This may be because the original study included participants from a broader range of agricultural and fisheries projects. Relationships between success factors and a project's assessed level of success Previous work by the primary author (Bartlett, 2016a) to develop and test a method for evaluating the relative success of multiple research projects has been extended in this study, by exploring whether relationships exist between a project's assessed level of success and the series of factors thought by project participants to enhance or diminish success. Understanding how the success factors are expressed in projects with different combinations of research achievements and impact could facilitate improvement in the design and implementation of future research projects. Over time, the results of such evaluations and analysis may help to improve the effectiveness of both individual projects and a program of research.
The study has shown evidence that these success factors are manifested in different ways in projects with different levels of evaluated success (see Table 4). It is clear that a project that has high research achievements and high impacts is likely to exhibit evidence that most of the identified success factors have contributed to the enhanced success, as illustrated by the domestication of Australian trees project (FST/1998/096). Conversely, a project that has low achievements and low impacts is likely to exhibit evidence of the expression of these factors that diminish project success. In the project on sawing and drying of eucalypt timber (FST/2001/021), factors such as scoping and design, funding, donor influence on project design, selection of trial sites, and leadership and management all contributed to the lower level of success. These relationships with relative project success appear to be strongly evident for the 12 success factors most frequently identified by project participants. Identifying factors that influence the success of forestry research projects implemented in developing countries Conclusions There is a strong emphasis on aid effectiveness in the delivery of ODA-funded research programs (OECD, 2005). In the case of agricultural (and related) research, it is important to have an understanding of the ways in which desirable impacts can be enhanced and adverse impacts diminished (Millstone et al., 2010). Better understanding of the factors that can enhance or diminish the success of different research projects in different circumstances is an important element of this more general understanding. This case study of 10 ACIAR forestry research projects implemented in Vietnam has identified 22 success factors, 12 of which represent 80 per cent of participants' responses, indicating that these factors are likely to have a strong influence on the perceived level of success achieved by a project. The findings from this research on factors that contribute to project success correspond well with those previously identified (Miles, 1998;Pearce, 2010), but also suggest some additional factors and clarified particular aspects of some previously identified factors. Most of the success factors in this study had particular relevance to project design and project implementation. This finding is helpful for research program managers and project leaders, as they have the ability to influence these factors and thereby the ultimate effectiveness of the research project. Forestry This study demonstrated that it is informative to consider both research achievements and impacts when evaluating the success of a research project, and that the success factors identified do relate to levels of project success. Paying attention to success factors related to project design, particularly the degree of collaboration with partners, the experience of the project leader in the country where the project will be implemented and the time allocations for the collaborating scientists, is likely to enhance prospects of the project's success. Success is also influenced by some aspects of project implementation, including the commitment and collaboration of the partners, the degree of capacity building undertaken, the selection of locations for conducting field research, how much time the collaborating scientists are able to spend in country working with their partners, andwhere relevantthe quality and design of experimental sites. 
There are also factors outside the control of a project that can affect its success, including the longevity of the research collaboration, the continuity of partners involved in a project and the mechanisms that enable research outputs to be widely disseminated to end users. Overall, the results reported here suggest that the qualitative approach applied in this research can help understand why some research projects are more or less successful than others, and that the identification of factors that contribute to the level of project success provides useful guidance for those managing and implementing collaborative forestry research programs and projects. Supplementary data Supplementary data are available at Forestry online. FST/2006/087 'Optimizing silvicultural management and productivity of high-quality acacia plantations, especially for sawlogs' This 4-year project focused on developing silvicultural practices to enable production of sawlogs from smallholder plantations, in support of Vietnam's goal to increase the supply of domestically produced timber for its wood industries (Government of Vietnam, 2007). When acacias are grown for pulpwood rotations of 5-6 years are common, whereas rotations of 10-12 years are needed for one quarter of the logs to achieve sawlog specifications (Byron, 2014). The project followed a 3-year development project (AusAID's Collaboration for Agriculture and Rural Development (CARD) Project Number: 032/05 VIE), involving pruning and thinning trials in acacia plantations in north-central Vietnam, which had showed some promising prospects for sawlog productionalthough some of the trials were impacted by a typhoon in 2008 (Phi et al., 2009). This ACIAR project established new trials involving fertilization, thinning and pruning at seven sites located in southern, central and northern Vietnam and monitored these trials for 3 years. The project was assessed by the author as having high research achievement but low impact. It would always be difficult for a 4-year project on a forestry system that takes 10-12 years to reach rotation age to achieve substantial impacts for end users. The project design included activities to disseminate information to smallholders but these were not implemented during the life of the project. The analysis shows that most of the success factors related to the project implementation phase contributed positively to the success of the project, though there were problems related to poor collaboration between partners in the different regions of Vietnam where the various trials were located. The weaknesses in this project appear to relate predominantly to various success factors related to the project design. The duration of the project meant that, while the project produced good information on the system's productivity up to age three, it could not present conclusive results on the sawlog system's financial returns, which is necessary to convince growers to change their practices and delay income receipt for several years. It was also apparent from the respondents that lack of effective collaboration with Vietnamese partners on the project design and ACIAR's influence on the selection of partners and locations for the research trials diminished the project's success. 
FST/2001/021 'Improving the value chain for plantation-grown eucalypt sawn wood in China, Vietnam and Australia: sawing and drying' This 4-year project was designed to conduct research related to improving the production of sawn timber from small diameter eucalypt logs, with research conducted in China, Vietnam and Australia. Apart from building research capacity, the project conducted a sawing trial involving 10-yearold Eucalyptus urophylla logs processed in a small sawmill in Vietnam. This analysis focused on the activities conducted in Vietnam but it is apparent that there were greater achievements in China (Pearce et al., 2013). The Vietnamese component of the project was assessed by the author as having low achievements and low impacts. The analysis suggests that the project was poorly designed, with many of the success factors related to project design contributing to diminished project success. The analysis indicated that respondents considered about half of the success factors related to project implementation, particularly the capacity building factor, had contributed to enhanced success. Inadequate attention to the others resulted in diminished success. At the completion of the project, the scientific reports from the Australian sawing trials were not translated into a manual that could be easily understood by Vietnamese partners. The project had no mid-term review, which precluded a discussion on how the research might have been refocussed to generate outputs more aligned to end user needs. There were four design-related issues that also diminished success. Firstly, there was inadequate scoping and collaboration with Vietnamese partners in the project design. ACIAR and the Australian researchers assumed that research was necessary on the production of sawn timber, rather than on other products, such as veneer, and that there were sufficient suitable eucalypt resources existing in Vietnam to sustain a sawlog industry. Secondly, it assumed that appropriate and committed Vietnamese wood processors could be found to participate in the research and then adopt the recommended practices. However, only one small sawmill participated and it did not have the technology available to properly dry or recondition the sawn timber. Thirdly, inadequate funding was provided for the planned activities, with ACIAR reducing the project's funding by 46 per cent in the final stages of design without adjusting the magnitude of the planned research activities. Fourthly, the project leader had not previously worked in Vietnam and only became involved in the final stages of the project's design, following the retirement of the planned leader. Identifying factors that influence the success of forestry research projects implemented in developing countries
2019-04-02T13:04:14.363Z
2017-05-01T00:00:00.000
{ "year": 2017, "sha1": "7c01806a0f5249b0ab8a37c8a3b6235e4ff6e04e", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/forestry/article-pdf/90/3/413/17234480/cpw067.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "ccb7ebef97084a6f054aebef6b096ed87d6b05e0", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Business" ] }
233881504
pes2o/s2orc
v3-fos-license
Isolated intestinal neuronal dysplasia-type B of ileum: A rare occurrence. Intestinal neuronal dysplasia type B in the gastrointestinal tract is a rare occurrence and may occur alone or in combination with Hirschsprung disease. Distal colon seems to be a frequent site for isolated IND-B cases; however, small bowel involvement is scarcely reported. We report a case of a 9-year-old boy presenting with features of intestinal pseudo-obstruction for 5 years. Exploratory laparotomy revealed narrowed distal ileum with huge proximal dilatation. Histopathology of the resected terminal ileum revealed giant submucosal ganglia, hyperplastic submucosal nerves, and ectopic ganglion cells in the lamina propria suggestive of IND-B. Although IND-B involving the ileum in isolation is a rare occurrence, suspicion should be kept in cases of intestinal obstruction with minimal response to conventional treatment. Background Intestinal neuronal dysplasia type-B (IND-B) is a rare congenital malformation of gastrointestinal innervation caused by dysplastic embryonal development of the enteric nervous system. The changes associated with IND-B are more common in the distal colon; however, they can affect any segment of the enteric nervous system and occur in different age groups ranging from newborns to adults, alone or in combination with Hirschsprung disease (HD) (1). Very rarely, isolated ileal involvement has been reported (2). We herewith report a case of IND-type B in the ileum of a child presenting as chronic intestinal pseudo-obstruction (CIP). Case Presentation A 9-year-old boy presented with intermittent abdominal pain and a sensation of fullness since 4 years of age. He had a history of poor bowel habits with encopresis. Antenatal and natal histories were uneventful. Histories of delayed passage of meconium, recurrent vomiting, retentive posturing or blood in stool were absent. The child had 2-3 episodes of febrile seizure which remitted on their own. He had received empirical anti-tubercular therapy for a duration of 11 months for the presenting complaint, although there were no other systemic symptoms. There was no significant family history. On general examination, the child's weight was 24.8 kg (3rd-10th centile) and height 138 cm (10th-50th centile), with failure to thrive. There was no pallor, icterus, edema or lymphadenopathy. On abdominal examination, inspection revealed a distended abdomen with a few prominent veins, and visible peristalsis and borborygmi were present. There was no organomegaly on palpation. On per-rectal examination, there were no fissures, tags or gush of air, and soft stool was palpable. Cardiovascular, respiratory and neurological examination showed no abnormalities. On upper gastrointestinal endoscopy, the oesophagus and stomach were normal. X-ray of the abdomen, erect posture, showed dilated small bowel loops with multiple air-fluid levels. Computed tomography of the abdomen showed collapsed proximal jejunum and terminal ileum with dilated lower small bowel loops (Fig. 1). The patient was evaluated for pseudo-obstruction. Exploratory laparotomy revealed narrowed distal ileum (approximately 15 cm) with huge proximal dilatation. Differential diagnoses of intestinal pseudo-obstruction, Hirschsprung disease and celiac disease were considered. Resection of the distal ileum, cecum and appendix measuring 13 cm in length with end-to-end ileo-ascending colon anastomosis was performed. No apparent dilatation of the segment was noted.
Histopathology of the terminal ileum revealed giant submucosal ganglia (average 10-14 ganglion cells per ganglion), hyperplastic submucosal nerves and ectopic ganglion cells in the lamina propria. The muscularis propria was largely unremarkable; however, the serosal fat revealed hypertrophic nerve fibers. Histopathology was suggestive of IND-B (Fig. 2). The cecum and appendix were unremarkable. On follow-up at one month, bowel prolapse from the distal stoma site was noted. An exploratory laparotomy revealed that the proximal 20 cm of ileum anastomosed to the ascending colon was hugely dilated. Another ileal resection was done until normal calibre was identified, with ileo-ascending colon reanastomosis. Histopathology of the dilated part showed similar findings. At the 2-month follow-up, the child was gaining weight and passing formed stools. Discussion And Conclusions The patient presented with a history of CIP. On laparotomy, a narrowed distal ileum with proximal dilatation was present. Histopathology revealed intestinal neuronal dysplasia type B of the ileum. Similar findings were also present in the ileal segment found dilated and non-functional during the second surgery.
2021-05-08T00:04:09.016Z
2021-02-15T00:00:00.000
{ "year": 2022, "sha1": "185766bf6ba08cf652a6370c2ad71924855e738b", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-202215/v1.pdf?c=1631888874000", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "d11f9d7e464eec4dc0eb0c1859842e4642dbab21", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
169930863
pes2o/s2orc
v3-fos-license
Internet of Things in Industry: A Survey of Technology, Applications and Future Directions: Internet of Things (IoT) is a congregation of interconnected physical objects that are linked with unique identifiers and transfer data over the appropriate network without human intervention. It offers a promising opportunity to develop powerful industrial applications. To understand developments in industrial IoT, this paper reviews the business value of IoT in industrial applications, its architecture, related technical elements, benefits and challenges. Among the many business values it delivers, shop floor visibility, supply chain management, health, safety & environment, predictive maintenance, the industrial digital thread, supply chain integration, human machine interfaces, quality control and big data are discussed in this paper. The major elements of IoT, such as identification, sensing, communication methodology, computation methods and services, are also covered. Future research areas are gathered from the literature and discussed in this paper. The key contribution of this work is to summarize the current state of IoT in industries systematically. II. INDUSTRIAL APPLICATIONS AND ITS BUSINESS VALUE This section discusses the applications of the Internet of Things from an industry perspective. A. Shopfloor Visibility Smart manufacturing offers real-time online visibility and delivers data visibility beyond the normal dashboard. SBEMS (sensor-based efficiency monitoring system) [10] and TBEMS (ToC-based efficiency monitoring system) [43] represent real-time online shop floor monitoring systems for making smart decisions. SBEMS and TBEMS bring online visibility of machinery and operator efficiency, machine running status, number of items produced, inventory status, and rejections along with their respective reasons into the cloud environment. This is achieved by installing various sensors on shop floor machines and integrating them with data acquisition on the industrial network. B. Supply Chain Management [11] Early prediction of failures through real-time supply chain information helps to reduce inventory and capital investment. IoT in the supply chain helps manufacturers gain a better understanding of this information. Visibility can be achieved by connecting plants and suppliers. This helps suppliers track materials, remotely monitor inventory status and follow product movement. The delivery information can also be integrated with ERP (Enterprise Resource Planning), PLM (Product Lifecycle Management) and other systems. C. Health, Safety And Environment [11] Health and safety related performance indicators include illness rates, incidents, absenteeism, near-misses and property damage. In general, these measurements are stored in spreadsheets and emails. The indicators have no relational value, and root cause analysis becomes difficult. The industrial internet and analytics will help to identify and address these health-related issues based on data collected from various sources. D. Predictive Maintenance [10] Early monitoring of machines gives a high rate of ROI (Return On Investment) in different scenarios by improving machine life, decreasing asset downtime, increasing production and enabling early prediction of servicing for critical machines. E. 
Industrial Digital Thread [12] Service engineers often do not have right data or insights required to troubleshoot the industrial machines to perform the corrective and preventive maintenance activities. Many times, QA engineers may require right data to understand the root cause of the problem. The root cause of the problem may relate to design, manufacturing, supply chain logistics or production planning. It is very difficult to identify the problem without right data and insights. [12] discussed various types of industrial related digital application. F. Supply Chain Integration [13] Based on the profiles and processes, the supply chains are different in different industries. The impact of IoT in the supply chain will vary from domain to domain. For example, in the year 2018, the IDC predicted that the digital connectivity will improve 15% productivity in manufacturing supply chains. Application of IoT provides data about product-related inventories, the location of the consignment, ambient temperature and various other parameters. By considering all these data, organizations can take immediate action to maintain the inventory level, predict the arrival time of the material and possible delays and quality related issues. G. Human Machine Interface [14] The collaboration between the human and the interface will not only improve the productivity in the manufacturing system, but also the safety of the operator. The human and IoT device interactions, improved with the neuroergonomics approach. Neuroergonomics is related to the human brain related to the behaviour at work and everyday activities. H. Quality Control [16] The sensors gather cumulative product data and data from various phases of a product cycle. This data shares the product related information like temperature, rejections, pressure, working environment and transportation-related details. The IoT device provide data about the customer specification of the product. All these inputs can further be analysed to detect and correct the quality issues. I. Using the power of Big Data [17], [28], [29], [30], [31] Using Big data, insights can be derived which is not possible few years before. The greatest benefits of the big data in the manufacturing industries is to detect the defect of the product well before and improve the quality of the product to meet the supplies on time. The key challenges for the industries in the smart environment are the selection of appropriate IoT architecture. The next section discusses about the related IoT architecture. III. INDUSTRIAL IOT ARCHITECTURE The selection of IoT Architecture is application dependent. This section describes different industrial IoT architectures implemented for different application. The development of robotic machines, IoT principles, big data development, automation and the digital records leads to the fourth industrial revolution. The communication should bring trust between the elements of the IoT; controlling over the different parameters and finished components. [15] describes the potential ways of integrating IoT and blockchain technologies to solve the issues in the IoT connectivity. An innovative architecture has been developed by the combination of Smart-M3 and blockchain platform. Key benefit and feature of the proposed architecture is to store and retrieve the smart space elements which are required for interaction. Industrial internet architecture framework (IIAF) is defined by using the 'ISO/IEC/IEEE 42010:2011' model used in Industrial Internet consortium [16]. 
The IIAF identifies methodologies, procedures, and practices for a consistent description of industrial IoT architectures. This kind of framework helps easier evaluation, procedural and effective resolution to address the concerns of the stakeholder. This architecture begins from the basic framework and requires common architecture patterns to ensure the suitability to the industrial IoT application across all industrial sectors. The general architecture framework for the real-time environment requires proper transformation and extending the abstract architecture models into detailed architecture, addressing the industrial internet application model, thereby moving to the next level of architecture and system. Through the viewpoint of this reference architecture, offers guidance to system lifecycle processes from IIoT system conception, to design and implementation. The viewpoint of this model provides a framework for the system designers to anticipate continued through common architecture related issues in IIoT system design. [19] discussed the implementation of IIoT at various level. The smart manufacturing enterprises started the implementation of machine performance and smart enterprise control. This methodology will amalgamate the next generation of the Industrial IoT systems. In addition to this, the expanding power of embedded electronics, communication-related intelligence will move to the lower levels of the automation with the combination of sensors and actuators. Finally, the information technology (IT) and operations technology (OT) combined together and achieved the information-driven architecture. The architectures used in the past will not work in the future. Rami 4.0 is an IoT architecture [20] contains two layers. Exchange of information across both the layers will be transparent using semantics and data recovery based on the industry standard. The first layer is the time-sensitive layer used for real-time deterministic control This is indicated as 'fog' or 'edge'. The term time-based IP related to the technology included in this layer is equivalent to the same IIoT technology used in the enterprise cloud layer. But this communication technique is optimized for the real time. In the second layer, the devices are connected with sensors, actuators, and controllers in the cloud. The intelligence added to the devices. The second layer is the cloud enterprise layer. It includes the connectivity with various enterprise applications like ERP (Enterprise Resource Planning), SCM (Supply Chain Management), CRM (Customer relationship management) etc. Michael Weyrich [22] discussed the Reference Architecture (IIRA) has a strong industry focus. The Internet of Things-Architecture (IoT-A) provides a detailed view of the IoT's information technology aspects. Major standardization is happening in M2M communication, employing of client, scalable, and secure communication stacks. This standardization is based on a modified Open Systems Interconnection (OSI) stack and proposes specifications for the data link, adaptation, network, and transport layers. The IoT architecture is application specific. Understanding the various elements related to IoT will help to understand the functionality of the architecture. Next session discusses various IoT elements. IV. IOT ELEMENTS Understanding the basic building blocks of IoT will help to gain good knowledge and understand the functionality of the IoT. The major elements of IoT are discussed in this section. A. 
Identification Methods Identification of objects is important for the IoT. Several identification methods are available, such as uCode (ubiquitous code) and EPC (Electronic Product Code). IPv6 and IPv4 are different addressing methods used for IoT objects, and 6LoWPAN combines IPv6 with low-power wireless networks. Within the network, the address and the identification method together uniquely identify every object. B. IoT Sensors A sensor [23] is a physical device that reports the status of a physical process in a measurable way. Smart sensors are different from usual sensors: they embed a microprocessor, storage and diagnostic tools, and convert traditional signals into digital insights. They provide timely and valuable data to power analytical insights and yield improvements in cost, performance and customer experience. The speedier transformation of physical information can increase the range of opportunities for higher performance, increased capacity, maximum reliability, and innovation. C. Communication Connectivity technologies provide the links that make the IoT possible. Both wired networks and wireless communication play important roles. Wired fast Ethernet with IP addressing is spreading into areas such as sensor links, which previously communicated via simple circuits or proprietary protocols. Mobile communications are taking over from wired networks when the speed and capacity are adequate. Local area networks based on Wi-Fi 802.11 fill in the wireless gaps between short-range communications technologies such as RFID and main Internet connections or cellular networks. When IP addresses are assigned down to the sensor level, data analysis and processing become flexible and adaptable. If more data points are needed, additional data can be obtained from the sensor by fetching readings more frequently. Changes in the process become changes in software applications. Wired Ethernet is still the workhorse of factory automation and has unparalleled bandwidth, speed, and reliability. Possible cable damage and broken connections are a disadvantage that designers can address with mechanical protection of wires and software checking of links by regularly calling up the IP addresses of linked devices. High immunity to interference from electrical machinery is an added benefit [26]. D. Computation Methods The processing units represent the computational ability of the IoT. Arduino, Raspberry Pi, Gadgeteer, Cubieboard, Z1, WiSense and T-Mote are various hardware platforms used for running IoT applications. Various software platforms are available to deliver IoT functionality. An RTOS (real-time operating system) is useful for developing IoT applications in real time; TinyOS, LiteOS and RIOT OS offer lightweight operating systems developed for IoT environments. To achieve the vision of IoV (Internet of Vehicles), some automotive leaders established the OAA (Open Auto Alliance) and plan to bring new features to the Android platform. Cloud computing lets devices transfer their data to the cloud, where big data can be processed in real time. Cloud-centric IoT presents the idea of cloud computing forming the core of IoT, with users, sensor networks, middleware and private clouds completing the paradigm [33]. Considering IoT from a scaled-back perspective, such a representation becomes accurate. 
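As an illustration of how these elements fit together on a single device, the following is a minimal, self-contained Python sketch (not drawn from any of the surveyed systems) in which a shop-floor node combines identification, sensing and local computation into a telemetry payload. The device identifier, sensor, threshold and field names are all hypothetical, and the final print stands in for publishing the payload to a broker or cloud endpoint over MQTT or HTTP.

```python
import json
import random
import time
from statistics import mean

# Hypothetical device identity (identification element); in a real deployment
# this could be an EPC, a uCode, or an IPv6/6LoWPAN address.
DEVICE_ID = "press-line-3/machine-07"

def read_spindle_temperature_c():
    """Simulate a sensor reading; a real device would query hardware here."""
    return round(random.gauss(68.0, 2.5), 2)

def build_telemetry(window):
    """Computation element: summarize raw readings into a compact payload."""
    return {
        "device_id": DEVICE_ID,
        "timestamp": time.time(),
        "samples": len(window),
        "mean_temp_c": round(mean(window), 2),
        "max_temp_c": max(window),
        "over_limit": max(window) > 75.0,  # simple local rule for alerting
    }

if __name__ == "__main__":
    readings = [read_spindle_temperature_c() for _ in range(10)]
    payload = build_telemetry(readings)
    # Communication element: in practice this JSON document would be sent to a
    # gateway, broker or cloud service; printing keeps the sketch self-contained.
    print(json.dumps(payload, indent=2))
```

In a deployment, the same payload would be forwarded by a gateway into the fog or cloud layer, where the analytics and monitoring applications discussed above would run.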
For SSGs consideration must be given to smaller infrastructure that require IoT cloud [34]. E. Services The IoT services can be classified into Identification services, Data collection services, Collaborative aware services and Ubiquitous services. Identification services is an important service, it identifies the objects to connect to the real world. Data collection services handle the data collection and summarization of raw data which is required for further processing and reporting to the IoT application. The decision making and action is handled by the collaborative aware service. This helps to provide information anywhere, any time to anyone. Ubiquitous services will make it possible to reach the services globally. Addressing the challenges in this service is very difficult. Hence, many applications provide only the services, except ubiquitous. V. BENEFITS IN INDUSTRIAL IOT Following are the benefits of implementing IoT in the manufacturing production [27] 1) Predictive Maintenance: The usage of IoT in the manufacturing line will improve the machine uptime and minimize the failure of the machine by predicting the failures even before they occur. This will improve the productivity and revenue and decrease the production cost. 2) Data Analytics: Deriving insights from the factory data is possible. The unstructured data transformed into useful information and further used to derive useful smarter business decisions. Thus the capability of data analytics is ranked as an important feature for the IoT solution. 3) Higher Customer Satisfaction: With the implementation of IoT in industries, the SCM (supply chain management) and production will be agile. The problem related to out-of-stock will be minimized and online and real-time response to the demand is possible. This will help the customer 's request for appropriate products which will improve the customer satisfaction rate. 4) Gaining Competitive Advantage: The benefits stated above will impact on the organization's competitive advantage. Being an early adopter might get the industry in important position in the future. VI. BARRIERS TO IMPLEMENTING THE INTERNET OF THINGS IN MANUFACTURING PRODUCTION Various challenges and barriers in implementing the IoT in the manufacturing environments are discussed below 1) ROI Estimation: Estimation of ROI for the Internet of Things is a major barrier for potential adopters. IoT is a latest and abstract technology, the industries must invest both IoT and integration with the currently available system. Large-scale designing is required which will modify the functionality of the current system in place. 2) Cyber Security: Introduction of IoT system would generate huge amounts of data called as Big Data. Further, this big data can be transferred through the cloud for data analytics and the insights can be generated. Replication of the system to the whole factory with its machines can be extended with the help of wireless controllers also. The growing number of points increases the opportunity for cyber-attacks. This needs to be addressed well. 3) Cultural Resistance: Many industries will have the barrier to convert the mindset of the employees in the adoption of the IoT. The main reason behind this is the people are afraid that they will be substituted by intelligent systems and they will become a liability instead of a useful manpower resource. This problem can be eliminated by including the staff early in the implementation process and provide proper education and training. 
So that the employees can get the skills required for the latest smart manufacturing system. 4) Structural Problems: Sometimes the industry's existing infrastructure will become the obstacle for the IoT implementation. If the industry's internal dynamics are not adjustable and agile, this kind of new concepts will become impossible and risky. The complexity of IoT implementation depends upon the size of the manufacturing facility. The IoT transformation includes the internal dynamics, production process, and customer relationship. The complete digital connectivity will be realised to the industries which have a more agile manufacturing facility. VII. FUTURE RESEARCH DIRECTIONS In this section, some of the problem areas require further research are discussed below. 1) Scalability: The number of connected devices will become more due to the implementation of IoT. The major problems are devices naming, access authentication, maintenance, and protection. The biggest question in front of the researchers are a selection of protocols, standards, energy sources for the devices and architectural model to support the heterogeneity of things and related applications. 2) Architecture And Dependencies: To connect a huge number of things (objects) are connected, it requires an appropriate architecture to provide easy connectivity, proper communication technology, control and application programs. The application decides the selection of IoT architecture. In most of the cases, it is application dependent. The identification and correction of dependency problems in the architecture have large scope in the area of research. [42], [41], [40], [39], [38], [36], [35], [37]: The huge amount of raw data being collected require appropriate technique to convert the raw data into useful knowledge. Selective computational techniques [28], [29], [30], [31] are required to detect and remove the dirty data from the file. The huge scope is available in the improvement of computational techniques. More research possibilities are available in this area. 4) Interoperability: Most of the sensor related systems are closed systems. To achieve the benefit of the IoT, it requires openness in the system to provide real-time online information. A new type of unique communication interface is required to activate the efficient information across different types of systems. A new types of techniques and theory are required to achieve openness. Remote access to various industries or to a specific product is useful to the industry. It leads to the research direction of secured data transmission. 5) Security: The security-related issues are major in IoT because of the physical accessibility of sensors, actuators, wireless communication and openness of the systems [32]. The security-related problems lead to serious consequences, creates damage, disruption of operation or in some cases, even loss of life. 6) Privacy: The IoT paradigm must be able to deliver the users request for data access and unique policies to be created and evaluated in order to decide which access to be provided or denied. The data privacy should be decided based on the application. 7) Human in the Loop: Humans in the loop will involve humans and operate synergistically. Even though having humans in the loop have its benefits, simulating the behaviors of the human is a big challenge due to the complicated behavioral aspect of human beings especially in the industrial environment. 
New research is required to understand where humans can directly control the machines, where they take appropriate actions, how physiological parameters of the human are modeled, and how supervisory control is exercised. VIII. CONCLUSION In a complicated cyber-physical system, IoT combines different machines equipped with sensing, identification, networking and communication capabilities. In general, sensors and actuators are becoming more powerful, cheaper and smaller. Industries are keen on deploying IoT for applications such as automated monitoring and predictive maintenance. Due to the fast development of technology and industrial infrastructure, IoT is expected to become one of the most important technologies in industry. Recent research on IoT in industry is reviewed in this paper. Industrial applications and their business value are discussed first. Next, different IoT architectures for industrial applications are analysed. Afterward, the IoT elements required for IoT implementation are discussed. Finally, the benefits, challenges and future research directions related to IoT are discussed for the benefit of future IoT researchers in the industrial sector.
2019-05-30T23:46:49.042Z
2019-01-31T00:00:00.000
{ "year": 2019, "sha1": "2497e13934e53fe6276f2367332608eb7da03fd6", "oa_license": null, "oa_url": "https://doi.org/10.22214/ijraset.2019.1009", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "e010ca748f0b217681c6d507d38704c4548e2ed9", "s2fieldsofstudy": [ "Computer Science", "Business" ], "extfieldsofstudy": [ "Business" ] }
214802312
pes2o/s2orc
v3-fos-license
Newman-Penrose scalars and black hole equations of state In this work we explore the connections between Newman-Penrose scalars, including the Penrose-Rindler $\mathcal{K}$-curvature, with the equation of state of asymptotically Anti-de Sitter Reissner-Nordstr\"om black holes. After briefly reviewing the equation of state for these black holes from the point of view of both the Extended Phase Space and the Horizon Thermodynamics approaches, a geometric splitting is given for such an equation in terms of the non vanishing Newman-Penrose scalars which define the $\mathcal{K}$-curvature at the horizon. From this splitting, a possible thermodynamical interpretation is developed for such scalars in the context of the black hole thermodynamics approaches initially discussed. Afterwards, the square root of the Bel-Robinson tensor is employed to propose conditions at the horizons in terms of pressures or energy densities, which can be understood as alternative thermodynamical definitions of these surfaces. tures of the tidal and frame-drag fields in terms of K [11][12][13]. In a different but related context, theoretical evidence has accumulated suggesting a deep relationship between gravitation, thermodynamics, and quantum theory [14,15]. Recently, new perspectives [16] with respect to the thermodynamical role of the cosmological constant in the context of BH physics, first studied in [17,18], led to the realization that BH thermodynamics is a much richer subject than previously thought. Namely, variations of this parameter for asymptotically anti-de Sitter (AdS) BHs, defined globally through approaches such as [19], allow for the introduction of readily defined pressures, volumes, phase behaviors, etc., in such settings. The analysis of the resulting thermodynamics leads to the extended phase space (EPS) approach for BH thermodynamics (for further details, see [20]). As a general feature, here we remark that asymptotically AdS BHs are found to be quite analogous to Van der Waals (VdW) fluids within this context. Now well, to further pursue the aforementioned connections, in this work we provide a thermodynamic interpretation for the K-curvature, together with other relevant scalars defined in the SC framework, in terms of the equation of state (EoS) of AdS-Reissner-Nordström (RN) BHs. This interpretation is motivated by a geometric splitting of such equation that let us associate certain combination of Newman-Penrose scalars to each term of the EoS. Interestingly, we find that the EoS corresponds intrinsically to an important theorem that related K with the Gaussian curvature of a 2-surface, applied to the BH horizon. The identification is performed within the EPS framework and then a comparison with another approach to BH thermodynamics, the so-called Horizon Thermodynamics (HT) [15], is considered. Although it is clear that a thermodynamic interpretation for the NP scalars could be done directly from the laws of BH dynamics, which can be interpreted as thermodynamic under certain assumptions such as stationarity, it is not straightforward to assign the role of pressure to any quantity within this context [21]. Therefore, it is necessary to include prescriptions for the identification of pressure such as those present in EPS or HT, and this fact can be seen as a justification for our approach. 
Although much of our discussion is framed in the context od AdS-RN BH, mainly because they are the most general static solution for an Einstein-Maxwell system, there are some features of our results that can be extended to other static settings in a straightforward way, and this possibility is discussed along the text. After presenting the identifications in EPS and HT, we venture to propose an additional identification for NP scalars in terms of pressures uniquely, taking into account the preceding results and a definition of pressure based on the square root of the Bel-Robinson (SQBR) tensor [22]. To end, we show that the K-curvature of the horizon can be decomposed in terms of the SQBR tensor and the aforementioned pressures associated now with the Maxwell and AdS sector. An equivalent proposal is considered in terms of energy densities. We follow these findings with a discussion of their plausibility and their relation with previous research in the literature. The manuscript is organized as follows. Section II briefly summarizes both the Extended Phase Space (EPS) and Horizon Thermodynamics (HT) approaches, from which the EoS for an AdS-RN BH is derived. The geometric splitting of the EoS in terms of the K-curvature, which is one of our main findings, is discussed in Section III. while Section IV introduces a thermodynamical condition at (or defining) the BH horizons in terms of pressures associated with K by considering the SQBR tensor. Afterwards, Section V is devoted to discuss our results and possible future work, with some additional final remarks given in Section VI. We use units where = c = k B = G = 1, and our signature is (+ − −−). Conventions regarding Riemann tensor, Einstein equations, and the definition of scalars follow those of Penrose and Rindler [3]. II. BLACK HOLE EQUATIONS OF STATE To provide a self-contained context for our results, this section is devoted to review the main ideas leading to the concept of BH EoS using both the EPS and HT approaches. Emphasis will be given to the construction and interpretation of the EoS for the AdS-RN BH. A. Extended Phase Space In the EPS approach [16,21,[23][24][25][26][27][28], also known as Black Hole Chemistry [20], we consider a BH spacetime with an AdS background, where the cosmological constant, λ, is allowed to take different values. By comparing the asymptotically AdS spacetime with cosmological constant λ with another spacetime of the same class, with corresponding constant λ + dλ, the first law can be written as dM = T dS + V t dP λ + φdQ + ..., where V t is a thermodynamical volume [20] conjugate to the pressure P λ , which is defined as P λ = − λ 8π . An important consequence of this extended first law is that the mass M is identified with the thermodynamical enthalpy H, i.e: H(P, S) = M [16], instead of the internal energy U , as it can be read from the intensive and extensive variables of the new first law. Therefore, the corresponding conjugated variables are given by T = ∂M ∂S P λ and V t = ∂M ∂P λ S , respectively, as expected from the usual thermodynamical theory. One of the main advantages of the EPS approach is the possibility of readily obtaining EoS for different AdS-BHs. For instance, the EoS for the AdS-Reissner-Nordström (AdS-RN) BH can be obtained as follows. 
The AdS-RN BH, in four dimensions D = 4 and written in conventional coordinates, is characterized by the metric where dΩ 2 is the usual metric for the two-sphere and where the cosmological constant in terms of the AdS radius, l, is given by λ = − 3 l 2 . The BH horizon, r + , is defined from g rr (r + ) = f (r + ) = 0. From this horizon condition, the BH mass M can be expressed in terms of the parameters (Q, r + , l 2 ), as As commented before, the importance of Eq. (3) lies in the fact that it can be identified with the enthalpy, as argued within the EPS approach. Therefore, we can obtain both the temperature and the thermodynamical volume of the AdS-RN BH by the standard expressions T = ∂M ∂S P λ and V t = ∂M ∂P λ S which explicitly yield and, after noting that l −2 = 8πP λ /3, respectively. Having already defined the pressure P λ , and obtained the temperature T and the thermodynamical volume V t , an EoS can be constructed. In order to do this, let us define a specific volume v = 2r + (l p = 1). Then, Eq. (4) can be written as a VdW EoS for the AdS-RN BH in terms of this specific volume as [20], Let us recall that the specific volume, in the context of usual thermodynamics, is given by v = Vt N , whereN should be understood as a number of particles. In fact, we can associate a number of particles to the horizon from a thermodynamic point of view, following in spirit the ideas within [29][30][31][32][33], from which an interpretation of AdS-RN BHs in terms of a VdW gas can be proposed. Provided we know the thermodynamical volume is given by Eq. (5), then the particle numberN should be proportional to r 2 + . This sets the proportionality constant as 4π/6. Even more, when demanding consistency with v = 2r + , then the number of particlesN can be written as A/6, where A is the area of the horizon, which could be interpreted as a realization of the Holographic Principle. Finally, with the help ofN and the thermodynamical volume, V t , the EoS for the AdS-RN BH formally coincides with that of a VdW EoS (including a second virial term), reading From this equation, it follows that a corpuscular microscopical interaction model could be proposed to provide an statistical mechanical foundation to this result. This possibility has been recently addressed in Ref. [29], in which both the equation of state given by Eq. (7) and the Bekenstein-Hawking entropy of a D-dimensional AdS-RN BH are recovered, using techniques from statistical mechanics and employing certain heuristic gravitational constraints. B. Horizon thermodynamics The origin of HT was the realization [34] that the Einstein equations on the horizon of spherically symmetric spacetimes can be interpreted in terms of the first law of thermodynamics. This relevant observation has been extended to other cases corresponding to different gravitational theories and symmetries (for a review see [15,35]), and also to the study of the thermodynamics of null surfaces [36]. The basic idea of HT is based on the identification being T µν the energy-momentum tensor of the complete matter sector (including a possible cosmological constant) evaluated at the horizon. Then, under the assumption of an Euclidean (thermodynamic) volume for the BH, the radial Einstein equation can be interpreted as an EoS, P tot = P tot (T, V ) for spherically symmetric AdS BHs [37]. Interestingly, this EoS does not depend on the specific form of g tt = g rr and the specific matter content is relevant only when interpreting the results. Even more, the authors of Ref. 
[37] derive the first law of HTs for Lovelock-Lanczos theories in the form dE = T dS−P tot dV , where E is an energy associated with the BH whose meaning is discussed below. In addition, the horizon enthalpy and Gibbs energy are defined according to the standard prescriptions G = E − T S + P V and H = G + T S. In the case of AdS-RN BHs we are interested in, the main differences between the EPS and HT approaches are: (i) the work term φdQ which is present in the EPS approach contributes, within HT, to the total pressure associated to the matter fields; (ii) in HT, the BH volume is assumed to be the Euclidean geometrical volume, being independent on the matter sector in contrast with the EPS approach, in which the volume is conjugate to the pressure given by the cosmological constant and depends on the matter content of the theory; (iii) regarding the quantities E, H, M defined in HT, the authors of Ref. [37] make the following proposal: M is the standard BH mass and E is the horizon curvature energy (the energy required to warp spacetime so that it embeds an horizon). Interestingly, E vanishes for planar and toroidal BHs and can be negative for hyperbolic and higher genus BHs. From a geometric point of view, it has been noted that E is related with the transverse geometry of the horizon [38] and with the generalized Misner-Sharp mass (evaluated at the horizon), M(r + ), which is given by [39], For any matter content, it has been shown [40] that M satisfies the generalized first law [41]. Regarding the enthalpy, for AdS-RN BHs we have that [37] M = H + Qφ + 2P m V , where P m stands for the pressure corresponding to the matter sector. Therefore, only when P m = 0 we have H = M = E + P λ V . Finally, we recall that only in vacuum and for P tot > 0, both the EPS and the HT approaches yield the same kind of thermodynamic behaviors and phase transitions [37]. Finally, following [37] it is easy to see that the EoS for AdS-RN BHs is given by where V is given by Eq. (5) and with is the radiation pressure exerted on the horizon due only to the Maxwellian source terms. Finally, we note that, although Eqs. (7) and (10) coincide, the thermodynamical behavior which they describe is different, as pointed out in [37]. III. EQUATION OF STATE AND K-CURVATURE In this section, a geometric interpretation for the EoS describing an AdS-RN BH is developed. In particular, it is shown that the mentioned EoS can be derived from the concept of K-curvature developed by Penrose and Rindler [3]. Let us consider a spherically symmetric and static geometry describing a BH within GR (including a negative cosmological constant, λ) coupled with Maxwell electrodynamics. Using the Schwarzschild ansatz we can write After choosing the following null tetrad with l · n = 1 and m ·m = −1, with the bar denoting complex conjugation, the only non-vanishing NP symbols for the static Einstein-Maxwell system are Ψ 2 = C pqrs l p m qmr n s , Specifically, for a metric given by Eq. (13) we get At this point, a couple of comments are in order. First, note that, after introducing the Misner-Sharp mass, M(r), as the following relation can be obtained: whose validity extends beyond our metric form (13), as discussed below in Section V. Second, the Komar energy, where dS µν denotes the surface element on S 2 (r) and ξ µ = (1, 0, 0, 0) is a timelike Killing vector, can be written, as checked by explicit calculation, as Therefore, using Eqs. 
(18) and (20), we obtain a relation between the Komar energy and the Misner mass Let us now consider the so-called holographic energy equipartition [15] which, for a static spacetime, reads where h and σ are the induced metrics defined on V and ∂V, respectively, T loc stands for the local Hawking temperature measured by an observer at rest in this spacetime and ρ K is defined as a Komar energy-density. The symbol= is used to specify that the equality is only valid at the horizon, r = r + , and it will be used with such meaning from now on. Following [15] we can attribute Even more, Eq. (22) can be written on the horizon, r + , as where and N = A = 4πr 2 + . In the AdS-RN case, by taking the trace of Einstein equations we get Therefore, Eq. (21) can be written as Thus, if the pressure P λ is identified with 3 8πl 2 , which is the essential assumption of the EPS formalism briefly summarized in the previous section, and after considering that E K= 1 2 N T = 2T S, where S is the entropy of the BH, Eq. (25) can be written as where ω is, for the moment, a pressure contribution defined as is the areal volume, which corresponds, in the EPS approach, to the thermodynamic volume for AdS-RN BHs, as previously stated. Up to this point, some comments are in order. First, note that the standard Smarr relation for AdS-RN BHs is recovered. In the uncharged case we get Second, in the charged (Maxwell) case we have where φ is taken to be the electric potential at the horizon. And third, Eq. (26) can be written as the VdW EoS given by Eq. (6) or Eq. (7). Now, we will connect the previous EoS with the geometric quantities used in the SC formalism, which is one of our main results. Penrose and Rindler [3] introduce the (complex) K-curvature of any spacelike two-surface in spacetime: where σ = m a m b ∇ b l a , λ =m amb ∇ b n a ,ρ = m amb ∇ b l a and µ =m a m b ∇ b n a , are spin coefficients related to the expansion and shear of the null congruences with tangent vectors l a and n a . Even more, they show [3] that where k g is the Gaussian curvature of the considered two-surface andK is the complex conjugate of K. For an AdS-RN BH horizon (generated by a shear-and expansion-free null congruence, whereρ = 0 = σ = λ), Eq. (31) reads where the first equality is provided by explicit computation and proves that theorem (31) is, when evaluated at a horizon, equivalent to the vanishing of the metric function f . Once the corresponding values for the NP scalars for the AdS-RN solution are computed, we write the mass as a function of the temperature using Eqs. (3) and (4), obtaining Then, introducing Eq. (33) in Eq. (31) we obtain Eq. (7), which we remind the reader is given by In this equation of state, the following identification can be performed which can be taken as the geometric splitting of the EoS for AdS-RN BHs in the EPS setting. It is important to note that this identification is not unique, but it depends on the thermodynamical framework in which one is working. For example, in the HT approach we have that the pressure is defined in terms of T r r , as reviewed above; in this context, the corresponding identification is given by The main difference, as discussed in [37], lies in the fact that the HT pressure includes contributions from all matter sources; thus, we can not split NP scalars as associated to the cosmological constant and the radiation pressure from a thermodynamic point of view. 
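Because the displayed formulas referenced above as Eqs. (3)-(7) did not survive extraction, the following LaTeX block restates the standard AdS-RN expressions that the discussion relies on, in the paper's units (G = c = k_B = 1, l_p = 1). This is a reconstruction of textbook relations consistent with the text, not a new result.

```latex
% Standard AdS-RN relations assumed in the surrounding discussion:
\begin{align*}
  f(r) &= 1 - \frac{2M}{r} + \frac{Q^2}{r^2} + \frac{r^2}{l^2}, \qquad \lambda = -\frac{3}{l^2}, \\
  M &= \frac{r_+}{2}\left(1 + \frac{Q^2}{r_+^2} + \frac{r_+^2}{l^2}\right) \quad \text{from } f(r_+) = 0, \\
  T &= \frac{f'(r_+)}{4\pi} = \frac{1}{4\pi r_+}\left(1 - \frac{Q^2}{r_+^2} + \frac{3 r_+^2}{l^2}\right), \qquad
  V_t = \frac{4}{3}\pi r_+^3, \\
  P_\lambda &= -\frac{\lambda}{8\pi} = \frac{3}{8\pi l^2}
  = \frac{T}{v} - \frac{1}{2\pi v^2} + \frac{2 Q^2}{\pi v^4}, \qquad v = 2 r_+ .
\end{align*}
```

The last line is the Van der Waals-type equation of state referred to as Eq. (7), obtained by eliminating l^{-2} between the temperature and the pressure and introducing the specific volume v = 2 r_+.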
In addition, we must remark that the independence of the thermodynamic description with respect to the explicit matter sources that characterizes HT is recovered in these latter results; therefore, the extension of these identifications to situations with additional sources such as scalar fields is straightforward. Summarizing, the proposed identifications let us conclude that the equation of state given by Eq. (7) is nothing but the theorem expressed by Eq. (31) linking the K-curvature with the Gaussian curvature of the horizon of an AdS-RN BH. Specifically for the EPS case, the NP scalars are involved in the following way: Λ corresponds to the pressure, the Komar density corresponds to the kinetic term of the ideal gas, the complex curvature K corresponds to the VdW "interaction term" and Φ 11 corresponds to the second virial term which describes the Maxwellian radiation pressure exerted on the horizon. In the case of HT, there is no second virial term since it has to be regarded as part of the thermodynamical pressure. This fact leads to striking differences between the two approaches with respect to the phase behavior and the interpretation of the results, that are analogous to the results of [37]. In addition to these points, the correspondence that we obtained permits, in principle, to see how the geometric information encoded in the NP scalars "emerges" from statistical mechanical models such as the one proposed in Ref. [29]. The corresponding development for the interaction term was already discussed in detail in that reference and for the Komar energy in [30]. IV. PRESSURE/ENERGY DENSITY CONDITIONS AT THE HORIZON From the set of non-vanishing NP scalars for our static spherically symmetric case, {Λ, Ψ 2 , Φ 11 }, and their combination in terms of K, we have identified Λ and Φ 11 with pressure terms, the first corresponding to the cosmological pressure and the second one to the Maxwellian radiation pressure exerted on the horizon. In addition, we established that these identifications are not unique but depend on the thermodynamic framework in which the BH are described. Our purpose in this Section is to take this idea further and provide another framework in which pressure-like interpretations are considered for the whole set of NP scalars, with the consequence that it is possible to define the event horizon in terms of a sum of these pressures. By virtue of the respective equations of state for each element, such a sum can also be understood as a condition on the total energy density, which is interesting and deserves to be discussed. Let us now motivate our subsequent discussion by fixing our attention in the second equality of Eq. (32), which, remembering that this equation is equivalent to the vanishing of the metric function at the horizon, reads 1 2r 2 (38) Notice that, since we are working with units such that 4πǫ 0 = 1, the third term of Eq. (38) represents the radiation pressure exerted on the horizon, which constitutes the matter contribution, P m , relevant for this case under the HT approach, as previously stated. That is, If we naively define an energy density, ρ M , as M/V , then Eq. (38) reads 3 4π or where the energy density associated with the cosmological constant, ρ λ = −P λ has been introduced in order to facilitate the interpretation of Eq. 
(41) and we have introduced a horizon curvature pressure on the horizon: This pressure was defined originally in [37] for general horizons and can be associated to their curvature since its sign depends on the sign of the 2-curvature of such surfaces, and also because it vanishes for planar horizons. It is useful to note, given the upcoming discussion, that a local equation of state can be constructed for this pressure by considering its relation with the horizon curvature energy mentioned above. Namely, we can define a horizon curvature energy density for this energy, which is given by E = r+ 2 , as with V t the Euclidean volume considered in HT. From this expression, explicit calculation leads to the following equation of state for the horizon curvature variables At this point, one could be tempted to read Eq. (41) as some kind of thermodynamical condition on the horizon but, in order to do that, an object defining "a pressure related to M " has to be introduced. In fact, this is largely artificial since both M and Q contribute to the Riemann curvature. In this line of thought, what makes more sense is to split the Riemann curvature as usual in its traceless and matter parts using the Weyl and Ricci curvatures. Following this argument, here we will see that Eq. (41) and, therefore, Eq. (30), can be written in terms of the gravitational energy-momentum by using the SQBR tensor [22]. For a generic spacetime, the Bel-Robinson tensor is defined as [42][43][44] T where C aecf corresponds to the Weyl tensor and * denotes the Hodge dual in four dimensions. The BR tensor is completely symmetric, traceless and covariantly conserved in vacuum. Given a generic timelike congruence, u a , a super-energy density can be defined as W = T abcd u a u b u c u d . Even more, as the BR tensor has dimensions of [L] −4 , a properly defined square root could account for a possible definition of the energy-momentum tensor for free gravitational fields [22]. The SQBR tensor, t ab , is a symmetric, two-index tensor which is solution of [45] T abcd = t (ab t cd) − 1 2 t e e t (ab g cd) + 1 24 In fact, t ab + Hg ab , where H is an arbitrary function, is also a solution of Eq. (46). Exploiting that the solutions we are interested in are classified within the Petrov-D type, then t ab can be written as [22] t ab = 2 c |Ψ 2 | l (a n b) + m (amb) + Hg ab , where c is an arbitrary constant introduced in order to compare with different possible conventions as in Ref. [46]. Different choices for the arbitrary function H have appeared in the literature, basically depending on the vanishing of the covariant divergence of t ab , u a ∇ b t ab . On one hand, the form of H consistent with covariant conservation has been considered in Refs. [47,48] and, on the other hand, H = 0 has been chosen in Ref. [46] in order to secure a traceless t ab and, therefore, a massless carrier of the gravitational field. A fluid-like interpretation of the SQBR tensor can be specified once a timelike congruence u a , with u a u a = 1, has been chosen. Such congruence is interpreted as a family of observers carrying a four-velocity u a . If the associated orthogonal projector is h ab = g ab − u a u b , any tensor (in particular the SQBR tensor) can be decomposed as For the SQBR tensor, µ g , q a , P g and π ab can be taken to be the energy density, heat flux, isotropic and anisotropic pressures, respectively, associated to gravitation and measured by the observer u a . 
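The displayed decomposition referred to just above (Eq. (48)) appears to have been lost in extraction. The following is a minimal sketch of the standard fluid-type splitting of a symmetric tensor with respect to u^a in the signature (+---) used here, with h_{ab} = g_{ab} - u_a u_b as defined in the text; the paper's own Eq. (48) may fix signs or normalizations differently, so this should be read as an assumption-laden reconstruction.

```latex
% Fluid-type splitting of a symmetric tensor (in particular the SQBR tensor)
% with respect to u^a (u_a u^a = 1), signature (+---):
\begin{equation*}
  t_{ab} = \mu_g\, u_a u_b + q_a u_b + u_a q_b - P_g\, h_{ab} + \pi_{ab},
  \qquad q_a u^a = 0, \quad \pi_{ab} u^b = 0, \quad \pi^{a}{}_{a} = 0,
\end{equation*}
% with the projections
\begin{equation*}
  \mu_g = t_{ab} u^a u^b, \qquad
  q_a = h_a{}^{c}\, t_{cd}\, u^d, \qquad
  P_g = -\tfrac{1}{3}\, h^{ab} t_{ab}.
\end{equation*}
```

For the Petrov type-D expression of t_{ab} with H = 0 quoted below, these projections give q_a = 0 and, by the tracelessness of t_{ab}, an isotropic pressure P_g = \mu_g/3, which appears to be the radiation-like equation of state the text notes remains valid for type N spacetimes; this inference should be checked against Ref. [46].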
Note that these identifications are strongly linked with the identification of the SQBR tensor as the object that describes the energy and momentum of gravitation. In the frame adapted to the principal null directions (the comoving frame), and choosing H = 0, these thermodynamic quantities read [46] µ g = c |Ψ 2 |, where the triad {x a , y b , z c } is a set of orthonormal basis vectors (see [46] for details). Interestingly, the EoS is also valid for type N spacetimes, being invariant under general Lorentz transformations [46]. On the contrary, if u a ∇ b t ab = 0 is taken as the main constraint of the SQBR tensor, then H ∼ −Ψ 2 and P g = 0 [47]. Let us now fix our attention in the first equality of Eq. (32), which reads If we choose H = 0, and consider that Ψ 2 is negative for well behaved AdS-RN BHs, then Eq. (51) can be written as which can be interpreted as a pressure constraint that defines the horizon. Note that the combination of signs which appears in Eq. (53) is determined by those of Eq. (30). Finally, we note that Eq. (53) clearly shows that the curvature pressure at (or defining the) horizons can be decomposed on matter (including cosmological) and gravitational components. We would like to stress again that Eq. (53) is completely equivalent to Eq. (32), which can be used to define the horizon. Then, r = r + is a horizon for the AdS-RN BH when i=g,m,λ,σ where the corresponding pressures have been defined above and α i are the respective numerical factors. It is important to note that, by virtue of the EoS for the different components, it is possible to obtain a energy density version of (53) which has an interesting form. Namely: As discussed above, c is introduced as a dimensionless numerical constant whose introduction is motivated mainly by the comparison between conventions; however, we see that its effects are non-trivial in these equations. Specifically, if we define a (local) total energy density for the horizon as ρ net = ρ m + ρ λ + ρ σ + µ g , it follows from (55) that Thus, it is evident that c has an important effect in the value of the total energy density. As an example, we can consider the case c = 3 4π , where Eqs. (52), (53), and (55) take the form, and This equation states, under the suppositions described, that the horizon could be defined as a spherical surface whose horizon energy density equals the remaining energy content, including gravitation. At this point, it is important to consider whether additional conditions exist which could lead us to expect some value for µ g , since this condition amounts to a physical argument to choose a value for c. This is an interesting point that requires further research. Nevertheless, we must remark that our findings show that, in any case, an horizon can be defined through a thermodynamical condition on the total energy in terms of the energies of the different elements that compose the system, this statement being also valid for the pressure condition (53). In particular, for this condition, our result certainly can not be interpreted a priori as an equilibrium condition, as evident from the coefficients in the pressure equation; instead, these coefficients are more general and could be associated to a condition on decoupled substances in contact. 
In a broader sense, we can conclude that there is another way to define horizons, from a thermodynamical point of view, in addition to the usual considerations about the Einstein equations as either realizations of the first law (HT) or EoS, and the interpretation of geometrical charges as thermodynamical potentials such as enthalpy (EPS); namely, as surfaces where energy densities or pressures of the constituents must satisfy a specific condition. It is expected that this rationale extends in static spherically symmetric metrics beyond our AdS-RN setting, by identifying the appropriate pressure and energy density terms for the additional sources in the context of hairy black holes, for example. From our point of view, microscopical models could be introduced to provide an statistical foundation for these conditions in the context of emergent gravity, although we consider that the absence of a temperature term leaves a greater range of possibilities open for these models than in the other approaches to BH thermodynamics, which is an issue when trying to restrict the microscopic models from which gravitation could emerge. V. DISCUSSION In this section we make some remarks with respect to the obtained results and discuss the different connections with previous work in the literature. In particular, we will focus in two aspects: the relation of our EoS in terms of NP scalars with the general laws of BH dynamics first laid out by Hayward in the language of SC [10]. Finally, we give some remarks with respect to the thermodynamic identifications we found, the differences between different approaches such as HT and EPS, and give some conclusions. First of all, it is necessary to discuss our results in the context of previous work by Hayward [10]. As we pointed out before, this work states the laws of BH dynamics in terms of SC. These laws have an essentially geometric character, so it is interesting to remark the similarities and differences with BH thermodynamics in terms of EoS, beyond the lack of prescriptions for the identification of pressures in such geometric setting that we mentioned above. Hayward's work is based on the study of the evolution of spacetime along null congruences; in fact, in previous work [49], Hayward also constructs the socalled dual-null dynamics of the gravitational field along this guideline. For our purposes, we summarize here some results of these works that we consider relevant for our discussion. In equation (18) we associated the Misner-Sharp mass to a certain combination of NP scalars; in fact, this relation is a particular case of the Equation (21) of [10], which states that where M is to be understood, in this context, as the double-null Hamiltonian which reduces to Misner-Sharp mass in the spherically symmetric case, and τ = m a n b ∇ b l a , τ ′ =m a l b ∇ b n a are spin-coefficients related to the twist of the null congruences, and vanish for the spherically symmetric spacetime that we are considering. S is a compact 2-surface with area A and area form * 1, where the Hogde dual is understood for forms defined on S. χ is fixed from the normalization of the spin basis, in this case χχ = l a n a = 1. After replacing the definition of the K-curvature, it is found that When we consider S to be a constant radius 2-surface in a spherically symmetric spacetime, Equation (18) is recovered. Thus, upon identification of the NP scalars with pressures we have the corresponding expression (61) that applies to general spacetimes. 
It remains to be seen if a thermodynamical interpretation can be provided for τ and τ ′ in stationary spacetimes, which we consider necessary for the analysis of rotating spacetimes within our approach. More connections can be identified between our results and the approach of [10]. In fact, one could ask whether thermodynamical approaches such as EPS or HT can be reconstructed in the language of SC and dual-null dynamics. This is indeed the case for HT. As reviewed above, this framework is constructed from the radial Einstein equation in a spherically symmetric spacetime, therefore it is to be expected that one of the equations describing the evolution of SC can equivalently fulfill such role. This equation is the cross-focusing equation, Eq. (17) of [10]: where þ ′ can be understood as a scaling-invariant version of the directional derivative along the null vector n a 1 , whereas ð ′ is the corresponding object for one of the spacelike directions on the sphere,m a (see [3] for more details). In the case of a horizon in a spherically symmetric spacetime, the vanishing of the spin coefficients mentioned above implies that this equation reduces to Hayward [10] argues that the trapping gravity, which in the dynamical BH laws is analogous to the temperature, is defined in terms of þ ′ρ ; furthermore, explicit calculation for the metric (13) shows that this object is equal to f ′ (r + )/2r + on the horizon, and then proportional to the BH temperature. From this fact, Eq. (63) can be taken as the starting point for HT by considering the definition of K, Eq. (30), evaluated at the horizon: This equation, together with (63), implies that The contributions associated to the HT EoS can be recognized in this equation. By Penrose's theorem (31), we know that K = k g /2; in addition, as explained in [37], internal energy in HT is proportional to the curvature of the horizon, k g , so we have that the K term in this equation provides the internal energy term of the integrated first law in HT. Furthermore, we identified before, in Eq. (36), that −(Φ 11 + 3Λ) is the pressure for AdS-RN BH within the HT approach according to the cosmological constant and radiation pressures; therefore, we obtain that this identification for the pressure in terms of Φ 11 and Λ applies to any spherically symmetric BH. Interestingly, this definition provides a connection with energy conditions in this context since Φ 11 + 3Λ ≥ 0 is one consequence of the dominant energy condition, so we have that the HT pressure in a spacetime that obeys this energy condition must be negative. This identification provides a way to cast BH dynamical laws in thermodynamical terms; for example, Hayward topology theorem [10] implies, in these terms, that a BH with negative pressure, necessarily must have positive curvature energy (equivalent to k g > 0). Since many results in [10] are a consequence of the dominant energy condition, thermodynamical statements for HT pressure in spherically symmetric spacetimes can be readily obtained. Although we found interesting connections between Hayward's BH dynamical laws and the BH EoS in terms of NP scalars, it is important to note that there are differences; in particular, regarding the laws with an explicit dynamical component. For example, Hayward's first law, Eq. (10) of [10], establishes that the area form of the horizon, * 1, changes proportionally to the square root of the NP symbol Φ 00 , which is zero for the type-D spacetimes we are considering. 
The character of this law is fundamentally different from the first law of BH thermodynamics in the HT and EPS approaches; in the first case, the variation of area (entropy) is connected to virtual displacements of the horizon, whereas in the second one, we compare asymptotically AdS solutions with slightly different parameters, as mentioned before. In fact, this difference has been pointed out before and its importance lies in that horizon area and entropy are not connected in dynamical spacetimes in a straightforward way. In spite of this issue, it is remarkable that the dynamical first law can give us lessons that are relevant for the BH thermodynamic approaches we are using. Namely, that different Petrov type-D spherically symmetric static spacetimes can not be connected dynamically through states of the same type, we must have a non vanishing Φ 00 to be able to generate evolution from one to the other; this requirement is to be expected since qualitatively we can imagine that the feeding of a BH with matter to increase its area implies both a flux of energy-momentum for the feeding process and the emission of gravitational waves produced while the BH reaches it final state. This reasoning gives us an idea of the high level of idealization required to describe BH processes in terms of quasistatic trajectories in a space of states and, in addition, is important in the context of approaches to thermodynamic equilibrium for BHs. In the case of the relation of the EPS approach and the NP symbols, we must remark that the generalization of our identification to other spacetimes is not straightforward. In contrast with HT, where the universality of the construction allows for a definition of thermodynamics irrespective of the concrete form of the metric, the EPS formulation of BH thermodynamics depends strongly on the matter sources. This can be seen as a trade-off for a richer phase structure than the one present in the HT case. For example, in our study of AdS-RN BHs we associated the EPS pressure, defined in terms of the cosmological constant, to the Λ scalar, which is proportional to the Ricci scalar; however, in other situations could be contributions to this scalar which are not associated with the cosmological constant, as in the case of massive electrodynamics. Thus, the connection of EPS thermodynamical quantities to NP scalars proceeds on a case by case basis. In a broader setting, it is interesting to note that the character of the thermodynamical descriptions is very different in our case and in EPS. We study the behavior of certain scalars at the horizon, obtaining that local geometrical relations can be interpreted as thermodynamical relations measured by certain observers, which put us closer to the spirit of HT, and even Hayward's work [10]; on the other hand, EPS considers laws for the whole spacetime since, as reviewed before, it compares solutions with slight differences in their parameters. The horizon relations enter in this context as a change of variable from M to r + , but this should not make us to lose sight of the structure of the approach. Thus, to map this structure in terms of NP symbols, defined locally, is not easy in general terms, even as the explicit EoS can be readily connected. These issues are far more important when devising applications, since the possible interpretations of the results depend on them. One last aspect we would like to discuss is the freedom in the choice of thermodynamic variables that our results suggest. 
Summarizing, we have seen that there are different ways to assign a thermodynamical role to the geometric objects that we are interested in, the NP scalars and K; namely, the equations (35) and (36), together with the associations with pressures and energy densities based on the SQBR tensor, Eq. (49). With these connections in mind, one could ask if there are some additional criteria that could help to filter out some alternatives. Naturally, this is the case. One criterion is given by the same reasoning that led historically from the analogies in BH physics to the recognition of the true thermodynamical nature of BH: the existence of processes, such as Hawking radiation, that implement physically the thermodynamic roles that we assign to geometric quantities. For example, in our construction of gravitational pressures via the SQBR tensor it is interesting to study what do mean the identifications that we performed for pressure and energy density of the gravitational field in terms of Ψ 2 . One important issue with the usual thermodynamical reasoning is that the backreaction of the metric to Hawking radiation is neglected, perhaps the emitted particles have an associated pressure that is related with our definition, although this is only a conjecture that must be studied in depth. In any case, proposals that describe physically the mechanisms underlying pressures and/or energy densities could be valuable; however, the physical implementation of these ideas is framed in the problem of the interpretation of proposals for the energy of the gravitational field, such as the SQBR tensor, and as such, it remains an open problem where consensus has not been reached yet. Another criterion that could help to obtain insights for the identification of the NP scalars in terms of thermodynamic variables could be the relation with approaches that consider gravity as an emergent phenomena. For example, in the EPS approach we can consider a connection of the asymptotically AdS gravitational system with a field theory via the AdS/CFT correspondence; in this context, thermodynamical variables such as the pressure P λ have a connection with quantities of the theory. In the particular case of the pressure, it has been argued that this variable corresponds to the number of flavors in the CFT, and that thermodynamical processes where pressure varies can be mapped to renormalization group transformations in the CFT [50]. Another possibility could be a corpuscular model such as the proposal in [29]. We must recognize that we do not know yet what model could provide a microscopical basis for gravity, but, in general, if the underlying theory is able to describe processes with a thermodynamic equivalent in the spacetime under study, then it constitutes an useful guideline for a true identification. However, there is still much work to be done in this context to obtain an answer. VI. FINAL REMARKS In this work we have investigated the connections between Newman-Penrose scalars for spherically symmetric spacetimes and the equations of state for asymptotically Anti-de Sitter Reissner-Nordström black holes in the context of Horizon Thermodynamics and Extended Phase Space approaches to black hole thermodynamics. 
In particular, we have shown that the Penrose-Rindler K-curvature corresponds to the generalized Misner-Sharp mass density associated with the areal volume of the horizon for the studied spacetime, and concluded that this is a particular case of the relation between Newman-Penrose scalars and spin-coefficients and the dual-null Hamiltonian introduced by Hayward [10]. Also, a geometric splitting is proposed for the equations of state of Horizon Thermodynamics and Extended Phase Space in terms of the non-vanishing Newman-Penrose scalars which define the K-curvature at the horizon. This result provides prescriptions for the identification of pressures among the Newman-Penrose scalars, which is not straightforward in a purely geometric approach. Finally, we arrived at conditions for the pressures or energy densities at (or defining) the horizon, which have been derived by introducing the square root of the Bel-Robinson tensor and a gravitational pressure related to it; such conditions can be thought of as thermodynamic definitions of the horizons for the kind of black holes considered here. We also discussed the relation between these results and previous works in this field, identifying directions for future work and important open questions. Our results allow for a description of black hole thermodynamics in terms of Newman-Penrose scalars, which can be readily linked with previous findings such as the dynamical laws for black holes [10], and therefore can be useful to provide robust interpretations for thermodynamic developments in the context of the Horizon Thermodynamics and Extended Phase Space approaches. In addition, our horizon definitions in terms of pressures and energy densities are interesting since they can be thought of as emergent relations, which allow for a broader set of possibilities regarding models that seek to provide a microscopical foundation for gravity.
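As a concrete cross-check of the EPS-style identifications summarized above, the sketch below assumes the standard extended-phase-space dictionary for AdS-RN (S = πr_+², P = −Λ_cosm/8π = 3/8πℓ², M read off from f(r_+) = 0) and verifies that T = (∂M/∂S)_{Q,P} reproduces f′(r_+)/4π and that V = (∂M/∂P)_{S,Q} = (4/3)πr_+³. These are textbook relations used for illustration, not results specific to this paper's NP-scalar formulation.

```python
# Minimal sympy sketch of the standard AdS-RN extended-phase-space relations
# (assumed textbook dictionary; not the NP-scalar form used in the paper).
import sympy as sp

rp, Q, P = sp.symbols('r_+ Q P', positive=True)

S = sp.pi * rp**2                              # Bekenstein-Hawking entropy
# Mass from f(r_+) = 0 with ell^2 = 3/(8*pi*P):
Mass = rp/2 + Q**2/(2*rp) + sp.Rational(4, 3)*sp.pi*P*rp**3

T = sp.simplify(sp.diff(Mass, rp) / sp.diff(S, rp))   # T = dM/dS at fixed Q, P
V = sp.simplify(sp.diff(Mass, P))                     # thermodynamic volume

print(T)   # (1 - Q**2/r_+**2 + 8*pi*P*r_+**2)/(4*pi*r_+)  == f'(r_+)/(4*pi)
print(V)   # 4*pi*r_+**3/3
```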
2020-04-07T01:42:47.502Z
2020-04-06T00:00:00.000
{ "year": 2020, "sha1": "0817ef03093cb0ab25050c95f25a810ab56bc126", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2004.02411", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0817ef03093cb0ab25050c95f25a810ab56bc126", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
254488856
pes2o/s2orc
v3-fos-license
The Jurassic epiphytic macrolichen Daohugouthallus reveals the oldest lichen-plant interaction in a Mesozoic forest ecosystem Summary Lichens are well known as pioneer organisms or stress-tolerant extremophiles, potentially playing a core role in the early formation of terrestrial ecosystems. Epiphytic macrolichens are known to contribute to the water- and nutrient cycles in forest ecosystem. But due to the scarcity of fossil record, the evolutionary history of epiphytic macrolichens is poorly documented. Based on new fossil of Jurassic Daohugouthallus ciliiferus, we demonstrate the hitherto oldest known macrolichen inhabited a gymnosperm branch. We applied energy dispersive X-ray spectroscopy and geometric morphometric analysis to complementarily verify lichen affinity of D. ciliiferus and quantitatively assess the potential relationships with extant lichenized lineages, providing new approaches for study of this lichen adpression fossil. Considering the results, and the inferred age of D. ciliiferus, a new family, Daohugouthallaceae, is established. This work updates current knowledge to the early evolution of epiphytic macrolichens and reveals more complex lichen-plant interactions in a Jurassic forest ecosystem. INTRODUCTION Lichens are a stable symbiosis composed of fungi and algae and/or cyanobacteria, also including a diverse microbiome. 1,2 Lichens are components of mostly terrestrial ecosystems from the polar regions to the tropics, 3 growing on all kinds of substrata, including bark, rock, leaf and soil. 4 Particularly epiphytic lichens growing on trees have been known to significantly contribute to water and nutrient cycling in forest ecosystems. 5 The lichen symbiosis is considered a crucial event in the evolution and transition of fungi from water to land, having evolved independently in different fungal lineages. 2 However, the evolutionary history of lichen-forming fungi is poorly understood, because of the sparse fossil record, and has been primarily reconstructed based on molecular dating analyses. [6][7][8][9] Although these approaches proposed a framework to illustrate how the lichen symbiosis may have evolved, fossil evidence is indispensable in testing and supplementing the current understandings especially when the earlier fossil was discovered. To date, 190 fossils have been accepted to represent genuine lichens, 2 among which 90% are amber-preserved, and only three permineralized and charcoalified fossils are over 100 My old. 2,10,11 The earliest convincing lichens are two crustose lichens from the Devonian, i.e., Cyanolichenomycites devonicus and Chlorolichenomycites salopensis (419-411 Mya), which were inferred to be saxicolous or terricolous. 10 Nevertheless, early evidence for foliose and fruticose lichens (so-called macrolichens) is particularly scarce. It has been proposed that the diversification of most modern macrolichens did not occur before the Cretaceous-Paleogene (K-Pg) boundary 65 Mya. 6,9,12 This is contrasted by the finding of the oldest Jurassic macrolichen, Daohugouthallus ciliiferus. 13 Given this lack of evidence for macrolichen fossils prior to the K-Pg boundary, the significance of the Jurassic lichen D. ciliiferus is crucial for understanding the evolutionary history of macrolichens. Because macrolichens have evolved in convergent fashion in multiple, unrelated lineages in Ascomycota and Basidiomycota, 8,14 it is vital to clarify the systematic position of D. ciliiferus. 
Unfortunately, diagnostic features, such as hamathecium, ascus, and ascospore structure, are not known from this fossil, which renders its exact classification challenging. Therefore, it is difficult to establish relationships between fossil and extant lichens, including when taking fossils as calibration points in molecular dating analyses. 2 The fossil material of D. ciliiferus was first described as a lichen-like organism, 15 but more convincing evidence to support its lichen affinity was only presented a decade later. 13 Herein, based on new material of D. ciliiferus, we expand the knowledge of this hitherto oldest known macrolichen, including its phorophyte relation with Jurassic gymnosperms in a Mesozoic forest ecosystem. To further assess the potential significance of this fossil, we applied two new methods: (1) energy dispersive X-ray spectroscopy (EDX), employed to distinguish and verify the fungal hyphae and especially the algal cells from the rock particles on the adpression fossil, as a complementary strategy to scanning electron microscopy (SEM); and (2) geometric morphometric analysis (GMA), used to assess potential morphological relationships of D. ciliiferus with extant macrolichen lineages. GMA mainly uses landmarks and outlines for assessing the morphological structure of samples, transforming it into digital information, 16 allowing quantitative analysis of the data, 17 and avoiding problems stemming from subjective ad-hoc analysis of morphological characters. 18 GMA has become popular in the taxonomy of higher taxa, including studies of morphological variability and diversity, and even the evolution of body structure. [19][20][21][22] In parallel, we updated the molecular clock analysis by Nelsen et al., 9 which, because of its comprehensive sampling, offers a much broader framework than other molecular clock studies that include lichen-forming fungi. 7,23 As a result, a new family, Daohugouthallaceae, is proposed to accommodate the Jurassic macrolichen, which is most similar to Parmeliaceae but remains incertae sedis within the class Lecanoromycetes (Ascomycota). EDX analysis and the lichen nature of D. ciliiferus EDX is an analytical technique for elemental or chemical characterization of materials, which can give a spectrum correlated with the elemental composition of the samples 24 ; it has been used to examine lichen mycobionts in amber-preserved lichens, indicating that they contain sodium, magnesium, silicon, potassium, calcium, and chlorine. 25 Herein, we employed EDX analysis to test whether there were significant differences between the lichen body and the surrounding rock, which was treated as a potential way to determine the lichen affinity of the adpression fossil. The elements and atomic percentages in the examined samples obtained by EDX analysis (see STAR Methods, Table S1 and Figure 1) showed distinct differences between the fossil and rock areas, which contained essentially 100% carbon (C) versus about 20% carbon together with additional elements such as oxygen (O, more than 50%), silicon (Si, 17-22%), C (15-23%), and minor potassium (K) and aluminum (Al), respectively. The fungal hyphae and the adhering photobiont cells examined from the fossil are highly consistent with the corresponding components from the extant lichens, except for containing minor Si (less than 3%). Moreover, the consistency is more obvious between the fossil and the chlorolichen.
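To illustrate the kind of contrast the EDX comparison relies on, the sketch below uses made-up atomic-percent values (loosely patterned on the ranges quoted above; they are illustrative, not the measured Table S1 data) and applies a simple rule: a spot dominated by carbon is scored as an organic (lichen) remnant, while a spot dominated by oxygen and silicon is scored as mineral matrix.

```python
# Illustrative sketch only: hypothetical atomic-percent values, not Table S1.
spots = {
    "fossil_thallus": {"C": 97.0, "O": 3.0},
    "fungal_hypha":   {"C": 92.0, "O": 5.5, "Si": 2.5},
    "photobiont":     {"C": 90.0, "O": 7.0, "Si": 3.0},
    "host_rock":      {"C": 20.0, "O": 55.0, "Si": 20.0, "K": 3.0, "Al": 2.0},
}

def classify(composition, carbon_threshold=50.0):
    """Score a spot as organic remnant if carbon dominates the spectrum."""
    carbon = composition.get("C", 0.0)
    mineral = composition.get("O", 0.0) + composition.get("Si", 0.0)
    return ("organic (lichen) remnant"
            if carbon > max(carbon_threshold, mineral)
            else "mineral matrix")

for name, comp in spots.items():
    print(f"{name:15s} -> {classify(comp)}")
```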
Therefore, although there is not much comparability between the EDX results of amber-preserved and adpression fossils, it is feasible to distinguish the adpression lichen mycobiont and photobiont from rock by EDX, and the photobiont seems more like extant green algae. GMA on the fossil and extant macrolichens The GMA of 149 images (Figure S1) resulted in cumulative values for all the principal components, listed in Table S2. The cumulative eigenvalues for the main axes (principal components) show the cumulative variance of the first four principal components amounting to 66.9% (Table S2), meeting the requirements for geometric morphometric analysis. Among the canonical variate analyses (CVA) for four combinations of the four principal components (individual variances 35.7, 14.1, 10.8, 6.3; Figure 2), the plot combining the first two principal components (cumulative variance 49.8%) showed that the fossil D. ciliiferus (group 2) appeared morphologically closest to foliose Parmeliaceae (group 3, Lecanoromycetes, Ascomycota), including the genera Hypotrachyna, Hypogymnia and two foliose Parmeliaceae fossils. 2,12 Molecular clock assessment The order- and family-level divergence times in Lecanoromycetes correspond to 176-194 Mya and 111-135 Mya, respectively, as previously calculated. 7 We used the detailed molecular clock tree provided by Nelsen et al. 9 to illustrate inferred ages for selected family-level clades in the Lecanoromycetes that include macrolichens (Figure 3). Most of the families have stem node ages younger than 100 Mya, a few were reconstructed as between 150 and 100 Mya, and only one family, i.e., Icmadophilaceae, has an inferred crown node age of approximately 200 Mya. D. ciliiferus, with an age of 165 My, is older than almost all the macrolichen families (Figure S2); although a few lineages are old enough (Figure 3A), they do not fit morphologically and/or ecologically. [27][28][29] Otherwise, Parmeliaceae is the candidate showing the best morphological and ecological fit in GMA (Figure 2), but that family is also rejected as a home for the fossil because of its significantly younger divergence time compared with the fossil (Figure 3A). Outside of Lecanoromycetes, other macrolichens that are distinct in phenotype according to the GMA results, such as Arthoniomycetes (Ascomycota), Lichinomycetes (Ascomycota), and Agaricales (Basidiomycota), correspond to 289 Mya, 168 Mya, and 136 Mya, respectively. 23 If these estimates are correct, the Jurassic D. ciliiferus may be treated as a new clade at least at the family level and reflect a new evolutionary scenario of early macrolichens. Thallus foliose to subfruticose, about 5 cm high, 3 cm wide (Figures 4A and 4E); lobes slender, about 5 mm long and 0.5-1.5 mm wide, tips tapering, nearly dichotomous to irregular branching, with lateral rhizinate cilia, concolorous to thallus to black, 0.5-1.5 mm long (Figure 4B); black spots present in some areas; lobules present (Figure 4B); unknown disc-like structure superficial, or nearly terminal, 0.25-0.5 mm in diam., sometimes immersed (Figure 4C). Upper cortex conglutinate, c. 1 μm thick (Figures 5A and 5B); photobiont cells globose, simple, mostly 1.5-2.5 μm in diameter (Figures 5I, 5G, and 5C-5F), anastomosed by or adhered to the fungal hyphae with a simple wall-to-wall interface; fungal hyphae filamentous, some shriveled, septate, mostly less than 1.25 μm wide (Figures 4I-4K and 5A-5C). (Table S4). The distances showed the degree of similarity between the different groups. Verification of the epiphytic nature of D. ciliiferus The new fossil material of D.
ciliiferus grew on an unidentified gymnosperm branch (Figure 4A, 4D, and 4E), providing direct evidence to consider D. ciliiferus as the oldest known epiphytic lichen. However, the fossil material does not provide further details on how D. ciliiferus attached to the branch, because those very common structures by which lichens attach to the substrate, such as rhizines, a lower tomentum, or an umbilicus, were not detected, and only the habitus reconstruction of the upper surface of D. ciliiferus in relation to its microhabitat was possible (Figure 6). DISCUSSION GMA results provided a clue for clarifying the potential affinities of Daohugouthallaceae. The CVA plots based on the comparison with homologous landmarks of 66 extant macrolichens and two Parmeliaceae fossils showed Daohugouthallaceae being most similar to foliose Parmeliaceae, but in the light of the much older age of the fossil, this similarity cannot be interpreted as convincing evidence of a close relationship, also given the absence of diagnostic characters of the ascomata. Therefore, the introduction of a new and monogeneric family for this fossil seems justified in this case. Although we could not ascertain the higher classification of Daohugouthallaceae, it seems to be more distantly related to other extant macrolichens, such as Arthoniomycetes (fruticose thallus and Trentepohlia-type photobiont), 30 Lichinomycetes (saxicolous or terricolous habitat and cyanobacterial photobiont), 31 and Agaricomycetes (mushroom-like), 32 than to Lecanoromycetes. GMA is expected to be useful in fossil and extant lichen taxonomy when traditional characters are missing, especially after more detailed tests based on a larger set of extant lichen species, extracting those diagnostic characters (but not including color and size) and then transforming them into digital information. 16 During the study, we noticed the photobiont cells were nearly half the size (1.5-2.5 μm in diam.) and fungal hyphae were thinner (mostly less than 1.25 μm wide) in D. ciliiferus, compared to extant macrolichens. Smaller photobionts (3-6 μm in diam.) 25 and hyphae (1.1-3.5 μm in diam.) 11 were also reported from other foliose macrolichen fossils, and Hartl et al. 25 considered the possible shrinkage to be related to drying during fossilization. Previous studies have demonstrated that mycobiont and photobiont cultures isolated from a squamulose lichen survived up to eight and three months, respectively, under desiccation stress, 33 and the size of both algal and hyphal cells ultimately shrank by half. Thus, we hypothesize that foliose macrolichens like D. ciliiferus were more sensitive to drying or other environmental adversity, so that their photobiont and hyphae are more easily deformed. However, this hypothesis is partially contradicted by the finding that the fossil crustose lichens C. devonicus and C. salopensis (419-411 Mya) 10 had normally sized photobiont cells and hyphae. Certainly, it is also possible that the small size of the photobiont of D. ciliiferus is not an artifact, because in general, the size of extant green algae as photobionts is above 6 μm, 11 but there are some smaller coccoid green algae such as Coccomyxa (1.7-3.4 μm in diam.). 29 Coccomyxa is known as the lichen photobiont of six extant lichenized orders, i.e., Baeomycetales, Lecanorales, Peltigerales, Pertusariales, Agaricales, and Cantharellales. 34 The first four belong to Lecanoromycetes (Ascomycota) and the latter two to Agaricomycetes (Basidiomycota).
Therefore, a small Jurassic alga like Coccomyxa as the photobiont of D. ciliiferus is also conceivable. The diversification of major macrolichen lineages after the Cretaceous-Paleogene (K-Pg) boundary was mainly concentrated within Lecanoromycetes. 2,6,12 Noticeably, the divergence time of Lecanoromycetes was estimated at 300-250 Mya based on molecular clock analyses, 7,9,23 coinciding with the period after the end-Permian extinction. Considering the diverse Permian forests that were in existence around the world during that period, 35 this provided a potential ecological setting for the evolution of early epiphytic macrolichens; however, there are no fossils to support such an assumption. After the end-Triassic mass extinction 200 Mya, terrestrial vegetation and forest ecosystems recovered from the Late Triassic onwards into the Early Jurassic. 36 This period could also have allowed the existence of epiphytic macrolichens, but again, no unambiguous fossil record exists that would support such a hypothesis, until our new specimen of the Middle Jurassic D. ciliiferus, found attached to the branch of a gymnosperm fossil. Therefore, our material shows that gymnosperms, possibly representing a conifer, served as substrate for epiphytic macrolichens already in the Jurassic. The new material of D. ciliiferus thus fills the long gap between the beginning of the Permian and the end of the Cretaceous with regard to the demonstrable existence of epiphytic macrolichens. Even so, there still remains a large temporal gap of more than 100 My between this fossil and extant macrolichens, which largely diversified in angiosperm-dominated forest ecosystems. 6,8 Extant epiphytic macrolichens are crucial components of terrestrial woody ecosystems, including gymnosperm conifer forests, 37 playing an important role in forest water and nutrient cycling. 5 Generally, epiphytic macrolichens attach to the bark or branch by the lower surface, rhizines, tomentum, or an umbilicus. However, it remains unknown how D. ciliiferus attached to the gymnosperm branch. Epiphytic macrolichen diversity can be regarded as an indicator of forest ecosystems, as there is a significant correlation between epiphytic macrolichen diversity and tree species composition. 5 The fossil record and molecular clock studies indicate that gymnosperms diverged around 315 Mya, 38 whereas conifers originated approximately 300 Mya and diversified 190-160 Mya in the Early to Middle Jurassic 39 into the various families recognized today. Therefore, macrolichens may have played a role in Jurassic gymnosperm-dominated forest ecosystems comparable to that of extant macrolichens in present-day forests. The presence of an epiphytic macrolichen already in the Jurassic indicates that lichens and perhaps other epiphytes may already have contributed to the ecological complexity of paleo-forest ecosystems. Further exploration of potential Mesozoic lichen fossils is needed to shed more light on this issue. Limitations of the study One limitation of our work is the sparse fossil record, with only one taxon of epiphytic macrolichen known so far from the Middle Jurassic; therefore, we cannot conclude that it updates the widely accepted timing of macrolichen diversification around the K-Pg boundary (ca. 65 Mya). The other limitation is the absence of some key diagnostic features in the D.
ciliiferus fossil, such as hamathecium, ascus, and ascospore structure, which limits a more accurate assessment of its phylogenetic position and further judgment on its relationship with extant lichen lineages. STAR+METHODS Detailed methods are provided in the online version of this paper and include the following: Any additional information required to reanalyze the data reported in this work is available from the lead contact upon reasonable request. METHOD DETAILS SEM and EDX examination The lichen fossils were examined and photographed using an Olympus SZX7 stereomicroscope attached to a Mshot MD50 digital camera system. For a selected fossil we made cross sections using a stonecutter; one piece was embedded in EXAKT Technovit 7200 one-component resin and then cut using an EXAKT 300CP cutting system. The thin sections were ground and polished to a thickness of about 20 μm using an EXAKT 400CS variable speed grinding system with P500 and P4000 abrasive papers; one piece of the fossil together with two pieces of the extant lichen species H. cirrhata (lichen 1 = extant chlorolichen HMAS-L 8322) and P. praetextata (lichen 2 = extant cyanolichen HMAS-L 13030) were sputter-coated with gold particles using an Ion Sputter E-1045 (HITACHI). SEM images were recorded using a scanning electron microscope (Hitachi SU8010); the above-mentioned pieces of the fossil and the two extant lichens were analyzed with a Zeiss MA EVO25 scanning electron microscope under high vacuum mode using an accelerating voltage of 20 kV. Energy Dispersive X-ray Spectroscopy (EDX/EDS) spectra were obtained with an Oxford X-act detector. The working distance was kept between 8 and 10 mm. Acquisition time was set to 60 s for each EDS spectrum. Plates were composed in Adobe Photoshop. Most lab work was performed at the Institute of Microbiology, except that the stonecutter was operated at the Institute of Geology and Geophysics, and the fossil thin sectioning and EDX were performed at the Institute of Vertebrate Paleontology and Paleoanthropology. All three institutes are in Beijing and affiliated with the Chinese Academy of Sciences. Geometric morphometric analysis For geometric morphometrics, 149 images (Figure S1) of 66 representative extant macrolichen species were selected from 15 families and 9 orders in both Ascomycota and Basidiomycota (Table S3), including specimens deposited in HMAS-L, photos provided by Robert Lücking, and pictures downloaded from the CNALH (Consortium of North American Lichen Herbaria) Image Library https://lichenportal.org/cnalh/imagelib/ and the Hypogymnia Media Gallery http://hypogymnia.myspecies.info/gallery, together with accepted Parmeliaceae fossils, 12 and the D. ciliiferus fossil; among these, 14 species had more than 2 samples and images, 15 species had only one sample each but more than 2 images, and 37 species had one image each, together with two images of accepted Parmeliaceae fossils, and 25 sub-images cut from the images of the D. ciliiferus fossil. The number of images sampled in this study comprehensively considered the quality requirements for geometric morphometric analysis, representativeness, and the availability of a discernable topology of thallus lobes or branches. The whole image set was divided into five groups according to lobe types: a microfoliose group, the D. ciliiferus fossil group, a long-branches group, a wide-lobed group, and a fruticose group (Table S4).
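As a rough illustration of the downstream ordination workflow described here and in the next paragraph (outline semi-landmarks, Procrustes superimposition, PCA, then a CVA-style group comparison), the following sketch runs on synthetic 60-point outlines; the data, group labels, and the use of linear discriminant analysis as a stand-in for CVA are illustrative assumptions, not the study's actual TPS-DIG/MorphoJ pipeline.

```python
# Illustrative sketch only: synthetic outlines, not the study's landmark data.
import numpy as np
from scipy.spatial import procrustes
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_points = 60                      # semi-landmarks per outline

def synthetic_outline(elongation, noise=0.02):
    """Closed lobe-like outline sampled at n_points, with shape noise."""
    t = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    x = elongation * np.cos(t)
    y = np.sin(t)
    return np.column_stack([x, y]) + rng.normal(0, noise, (n_points, 2))

# Two hypothetical groups: narrow-lobed vs wide-lobed outlines
outlines = [synthetic_outline(3.0) for _ in range(20)] + \
           [synthetic_outline(1.5) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)

# Procrustes superimposition of every outline onto a common reference,
# removing position, scale and rotation before shape analysis.
reference = outlines[0]
aligned = [procrustes(reference, o)[1] for o in outlines]
shape_matrix = np.array([a.ravel() for a in aligned])   # one row per specimen

pcs = PCA(n_components=4).fit_transform(shape_matrix)   # principal components
cva_like = LinearDiscriminantAnalysis(n_components=1)   # CVA stand-in
scores = cva_like.fit_transform(pcs, labels)

print("group means on the discriminant axis:",
      scores[labels == 0].mean(), scores[labels == 1].mean())
```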
The selected images were two-dimensional images with views of the front or back of the thallus where the branch tips were clearly recognizable. To orient the images in the same direction, they were adjusted so that the ends of the branches faced right. Images were named in a unified format: growth type-order-family-genus-species (sample number), except for the two selected reference fossil images, which correspond only to the family name. The external forms were represented by one curve extracted from the end of the branches or lobes, and the curve was resampled into 60 semi-landmarks by length (Figure S3). The starting point of the curve was selected as a point on the upper edge of the lobe or branch near the center or substrate and, after describing the outline of the whole lobe or branch, the endpoint returned to the lower edge near the starting point. The curves and semi-landmarks were digitized using TPS-DIG 2.05. 40 To merge all semi-landmarks into the same data file to produce the dataset for morphological analysis, the data file was opened as a text file to convert the semi-landmarks to landmarks, by deleting the line with the curve number and point number and replacing the landmark number with the point number. 21 MorphoJ 1.06a 41 was used for subsequent analysis of the dataset. Through Procrustes analysis, the morphological data of all test features were placed in the same dimensional vector space to screen out physical factors such as size. Principal component analysis (PCA) and geometric modeling of the mathematical space formed by the PC axes were used to coordinate the shape changes of the entire dataset. We then selected the dataset to generate a covariance matrix. In this context, the first two principal components, corresponding to the highest cumulative variance, represent the main variation pattern of the test shapes. The relationships among different morphological groups were then visualized through canonical variate analysis (CVA). Molecular clock assessment The time-calibrated maximum likelihood phylogeny of 3,373 Lecanoromycetes fungi was taken from the supplementary data provided by Nelsen et al. 9 It was edited for content and style using FigTree 1.4.4, 42 highlighting the clades including macrolichens. According to Nelsen et al., 9 the time-calibrated tree was originally constructed from a partitioned ML analysis using penalized likelihood in treePL v.
2022-12-10T16:09:52.149Z
2022-12-08T00:00:00.000
{ "year": 2022, "sha1": "918e72b86a995d8f9828c036f40f3ad80af256e7", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.isci.2022.105770", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "7d1ddbf095a9322fdcae6aff92174cc005784a78", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
416630
pes2o/s2orc
v3-fos-license
Controversial Treatment of a Victim of Severe Head Injury Complicated by Septic Shock and Acute Respiratory Distress Syndrome Pneumonia, severe sepsis, and acute respiratory distress syndrome (ARDS) are frequent complications after head trauma. Recombinant human activated protein C (APC) reportedly improves circulation and respiration in severe sepsis, but is contraindicated after head injury because of an increased risk of intracranial bleeding. A 21-year-old man with severe head injury after a car accident was endotracheally intubated, mechanically ventilated, and hemodynamically stabilized before transfer to our university hospital. His condition became complicated with pneumonia, septic shock, ARDS, coagulation dysfunction, and renal failure. In spite of intensive therapy, oxygenation and arterial blood pressure fell to critically low values. Simultaneously, his intracranial pressure peaked and his pupils dilated, displaying no reflexes to light. His antibiotic regimen was changed and ventilation was altered to high-frequency oscillations, and despite being ethically problematic, we added APC to his treatment. The patient recovered with modest neurological sequelae. Introduction Victims of severe head trauma are prone to remote organ complications. 1 Neurogenic pulmonary edema (NPE), pneumonia, sepsis, diabetes insipidus, coagulation dysfunction, acute respiratory distress syndrome (ARDS), and multiple organ dysfunction syndrome (MODS) occur most frequently. [1][2][3][4] Recombinant human activated protein C (APC), exerting anticoagulant, anti-inflammatory, anti-apoptotic, and profibrinolytic effects, reportedly improves cardiovascular and respiratory functions, and increases survival in severe sepsis. However, severe head injury within the last 3 months constitutes a contraindication to APC because of the increased risk of intracranial bleeding. 5,6 The aim of this report was to describe the medical and ethical dilemmas when facing a young victim of severe head injury who was threatened by cerebral herniation as his condition became complicated with septic shock and ARDS, with lack of response to conventional therapy. Case report A 21-year-old man was found unconscious hanging in a head-down position in his overturned car, approximately 1½ hours after the accident. He was taken by ambulance to the local hospital, where he arrived with a reduced Glasgow Coma Scale score and imaging findings (Figure 1A) including traumatic subarachnoid hemorrhage, scattered cerebral bleedings (Figure 1B), cerebral concussion, and brain edema (not shown in the figures). He was endotracheally intubated, mechanically ventilated, and hemodynamically stabilized with intravenous infusions of norepinephrine and dopamine (Table 1). The treatment continued uninterrupted during the transfer via air ambulance, staffed with an anesthesiologist and a nurse, to the intensive care unit (ICU) of our university hospital, where he arrived 7 hours after the accident. Here, his mean arterial pressure (MAP), intracranial pressure (ICP), cerebral perfusion pressure (CPP = MAP − ICP), and oxygenation ratio (arterial partial pressure of oxygen/fraction of inspired oxygen [PaO2/FiO2]), all presented as averages over 24 hours, initially displayed normal values (Table 1) and no indication for neurosurgical intervention was found.
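The two bedside indices tracked in Table 1 are a simple difference and a simple ratio; a minimal sketch of how they are computed is given below. The numerical values are hypothetical, not the patient's data, and the PaO2/FiO2 severity cut-offs follow the commonly used Berlin-definition thresholds rather than anything stated in this report.

```python
# Hypothetical values for illustration only; not the patient's measurements.
def cerebral_perfusion_pressure(map_mmhg: float, icp_mmhg: float) -> float:
    """CPP = MAP - ICP, both in mmHg."""
    return map_mmhg - icp_mmhg

def pf_ratio(pao2_mmhg: float, fio2_fraction: float) -> float:
    """Oxygenation ratio PaO2/FiO2 (FiO2 as a fraction, e.g. 0.6 for 60%)."""
    return pao2_mmhg / fio2_fraction

def ards_severity(pf: float) -> str:
    # Commonly used Berlin-definition cut-offs (assumption, not from the report)
    if pf <= 100:
        return "severe"
    if pf <= 200:
        return "moderate"
    if pf <= 300:
        return "mild"
    return "not in ARDS range"

cpp = cerebral_perfusion_pressure(map_mmhg=75, icp_mmhg=28)   # -> 47 mmHg
pf = pf_ratio(pao2_mmhg=60, fio2_fraction=1.0)                # -> 60
print(f"CPP = {cpp} mmHg, PaO2/FiO2 = {pf:.0f} ({ards_severity(pf)})")
```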
On hospital day 3, ICP peaked above 30 mmHg (not shown in Table 1), accompanied by pupillary dilation and diabetes insipidus, which were treated with thiopental and desmopressin, respectively. Inflammation parameters increased and the PaO2/FiO2 ratio decreased gradually. Despite the fact that he was prophylactically treated with cefotaxime because of his basilar skull fractures, he contracted bronchopneumonia. Since he required continuous hemodynamic support with moderate to high doses of norepinephrine and dopamine to maintain MAP above 70 mmHg (Table 1), we suspected septicemia, and gentamicin, erythromycin, and hydrocortisone were added to the treatment. Discussion It is difficult to predict the outcome of a patient with severe head injury who was hanging in a head-down position for a lengthy period of time before rescue. Enhanced venous pressure may transiently have increased ICP and compromised his CPP at this early stage of injury. Despite the fact that his condition was complicated with pneumonia followed by severe sepsis and ARDS, he survived with modest sequelae after receiving intensive therapy including APC. It is well documented that impaired cerebral blood flow after severe head injury leads to reduced brain tissue oxygen delivery and lactate accumulation. Reduction of cerebral blood flow or a decrease in oxygenation below a certain threshold value may escalate brain damage and eventually lead to cerebral herniation. 7 According to a recent report, brain tissue oxygen tension (PbtO2) after severe head injury can be improved by pharmacologically increasing CPP. 8 Investigators also suggest that therapy based on continuous correction of PbtO2 is associated with reduced mortality and better short-term outcome. 9 Concerning our patient, it is likely, albeit not proven, that septic shock and severe hypoxia combined might have prevented brain oxygenation from reaching the PbtO2 threshold despite vasoconstrictor support at high rates (Table 1). Victims of severe brain trauma have an increased risk of developing coagulation disturbances, partly because the brain cortex is rich in tissue factor (TF). 10 Abundantly released TF from the injured brain may induce a coagulopathy reminiscent of disseminated intravascular coagulation. 11 Independent risk factors for coagulopathy in isolated head injuries include GCS score <8, injury severity score >16, hypotension upon admission, cerebral edema, subarachnoid hemorrhage, and midline shift. 11 Our patient met five of these criteria. Although his coagulation disturbances most likely resulted from severe sepsis, traumatic coagulopathy and brain hypoxia may have contributed to his illness. In severe sepsis, bacterial products activate mediators that stimulate inflammation and coagulation. The transcription factor nuclear factor κB (NF-κB) and tumor necrosis factor α (TNF-α) are released from cells of the immune system and stimulate inducible nitric oxide synthase (iNOS), leading to excessive generation of NO in endothelial and vascular smooth muscle cells. NO contributes to circulatory shock and binds to superoxide anion to form peroxynitrite, which causes derangements of endothelial and epithelial linings, resulting in vascular leaks and a decrease in pulmonary gas exchange typical of ARDS. 12,13 Inhibitors of iNOS counteract septic shock, but do not increase survival from sepsis.
14 Bacterial products also release TF from mononuclear cells which triggers the extrinsic pathway of the coagulation cascade when conjugated with activated factor VII. 15 Thus, it might be that brain trauma and bacterial products acted together to promote the coagulation disturbances in our patient. Reportedly, 71% of the patients with head injury develop pneumonia. 3 Aspiration pneumonia, NPE, and ventilatorassociated lung injury are difficult to distinguish from ARDS and possibly might have worsened his condition. 1,4,13 However, since his chest X-rays showed no evidence of pulmonary edema upon arrival and the subsequent 2 days, we considered NPE to be a less likely explanation of his lung pathologies. Application of positive pressure ventilation with PEEP might have contributed to the increase in ICP by impeding venous return. Even a relatively low PEEP of 6-8 cm H 2 O might influence CPP negatively, but with decreasing effect as lung compliance decreases. 14 However, when his oxygenation dropped to the lowest point, instead of increasing PEEP, we discussed the idea of treating him with extracorporeal membrane oxygenation which was discarded because it would require anticoagulant therapy with high doses of heparin. Therefore, we changed to high frequency oscillatory ventilation, although according to the literature, no increase in survival has been noticed in comparison with conventional mechanical ventilation. 16,17 Since all conventional treatments had failed, the only possibility remaining was to perform a decompressive craniotomy, but the idea was abandoned because his condition indicated that he would probably not survive. His parents insisted that "something had to be done!", so we could see no option other than facilitating his general circulatory and respiratory conditions with the aim to improve his cerebral oxygenation. The protein C Worldwide Evaluation in Severe Sepsis (PROWESS) trial had shown that treatment with APC resulted in a faster regression of circulatory and respiratory dysfunction, and increased survival in patients with severe sepsis. 5,6 A retrospective study of patients with septic shock confirmed that APC rapidly improved vascular tone by decreasing the norepinephrine dose required to maintain arterial pressure. 18 On the negative side, 0.47% of the patients randomized to receive APC had suspected intracranial hemorrhage during the infusion period. 19 Correspondingly, a recent meta-analysis reported the rates of intracranial hemorrhage to around 0.4% and 0.7% during infusion and at 28 days, respectively. 20 The abdominal surgery subgroup of patients in the PROWESS trial, complicating with severe sepsis, had a more than 9% reduction of the absolute risk of fatal outcome. The relative risk reduction for 28-day mortality in the abdominal surgery patients was 30% and in the high-risk patients, as defined by an Acute Physiology and Chronic Health Evaluation II score of 25 or greater, the relative risk reduction was 40%. 21 Literature research revealed no controlled studies on the efficacy and safety of APC in septicemic patients with a primary head trauma. The only hit was the case report of an alcohol-intoxicated man, who arrived with subdural hematoma, with a GCS of 14. Although his trauma was milder compared with our patient, he became critically ill with sepsis and ARDS and survived with no rebleeding after treatment including APC. 22 We decided to administer APC in spite of the ethical dilemma of prescribing a medicine which is formally contraindicated. 
5,6,20 On the other hand, the fact that severe sepsis with more than five organ dysfunctions has a mortality rate of 85%-90% presented a strong indication for APC. He undoubtedly would face cerebral herniation unless we managed to improve his cerebral oxygenation. 23 We discussed the therapeutic alternatives and their potential complications openly with his parents, who gave their consent to start with APC. Briefly, APC is an endogenously produced serine protease, generated when protein C bound to its endothelial receptor is activated by the thrombin/thrombomodulin complex. It acts by proteolytic cleavage of activated coagulation factors V and VIII and escalates fibrinolysis by inhibiting plasminogen activator inhibitor-1. 24 APC attenuates inflammation by inhibiting the translocation of NF-κB, suppressing the release of proinflammatory cytokines and adhesion molecules, and reducing the accumulation of leukocytes in the alveoli. [25][26][27] Cleavage of the protease-activated receptor-1 by the APC-endothelial cell receptor complex exerts anti-apoptotic and enhanced barrier-protective effects in endothelial cells. 28,29 In rats subjected to infusion of endotoxin, APC inhibits the induction of iNOS by decreasing TNF-α production, thereby preventing circulatory shock. 30 We speculated whether APC might counteract coagulopathy after general trauma and after severe head injury in particular. 10,11 A prospective study of major trauma patients revealed that those with low tissue perfusion upon arrival, as indicated by a high base deficit, high thrombomodulin, and low plasma protein C levels, had increased mortality. 31 We found, however, no clinical investigation specifically focusing on the effect of APC on coagulopathy after isolated head trauma. We assume that our patient, at least transiently, suffered from low tissue perfusion, but thrombomodulin and protein C levels were not determined. As of today, information is sparse as to whether APC could be of potential benefit after traumatic brain injury. Investigators recently noticed that after standardized cortical trauma in mice, APC reduced the volume of lesions and improved the neurological outcome. They also compared wild-type APC with an APC analog with reduced anticoagulant and normal cytoprotective activity for late treatment. The APC analog displayed a greater neuroprotective effect and less intracranial bleeding compared with wild-type APC. 32,33 Whether APC, which was primarily administered against septic shock, also acted beneficially on the brain injury per se remains elusive. Since his condition improved with a decrease in ICP, we believe that APC, through its improvement of circulation and respiration, might have contributed to his recovery, which left him with modest sequelae only. However, his recovery inspires us to suggest a future controlled randomized trial of the efficacy and safety of an APC analog with reduced anticoagulant and maintained cytoprotective activity, if it becomes available, in patients with MODS after severe head injury. Conclusion Although ethically controversial because of the increased risk of intracranial bleeding, APC should not be rejected as part of a rescue therapy in cases of severe head injury complicated by septic shock and ARDS not responding to conventional therapy. However, as a general rule, decisions to treat should be taken based on evidence from controlled randomized trials and not from animal experiments or case reports.
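As a footnote to the risk-benefit weighing described in the discussion above, absolute risk reduction, relative risk reduction, and number needed to treat or harm follow from simple arithmetic; the event rates below are hypothetical round numbers chosen only to mirror the order of magnitude of the figures quoted from PROWESS (about 9% absolute mortality reduction in the abdominal-surgery subgroup and roughly 0.5% intracranial hemorrhage during infusion), not the trial's actual data.

```python
# Hypothetical illustrative rates; not the PROWESS trial data.
def risk_metrics(control_rate: float, treated_rate: float) -> dict:
    """Absolute/relative risk reduction and number needed to treat (or harm)."""
    arr = control_rate - treated_rate            # absolute risk reduction
    rrr = arr / control_rate                     # relative risk reduction
    nnt = 1.0 / arr if arr != 0 else float("inf")
    return {"ARR": arr, "RRR": rrr, "NNT": nnt}

# Mortality benefit (assumed rates of the same order as the quoted subgroup)
benefit = risk_metrics(control_rate=0.31, treated_rate=0.22)
# Bleeding harm: treated rate higher, so ARR is negative and NNT reads as NNH
harm = risk_metrics(control_rate=0.002, treated_rate=0.007)

print(f"benefit: ARR={benefit['ARR']:.2f}, RRR={benefit['RRR']:.0%}, "
      f"NNT={benefit['NNT']:.0f}")
print(f"harm:    ARR={harm['ARR']:.3f}, NNH={abs(harm['NNT']):.0f}")
```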
2018-05-08T18:07:09.580Z
0001-01-01T00:00:00.000
{ "year": 2011, "sha1": "4a81d36b37c114d5ba394ca72b80f41b658900de", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=10428", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4a81d36b37c114d5ba394ca72b80f41b658900de", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
8357337
pes2o/s2orc
v3-fos-license
Influence of atrioventricular nodal reentrant tachycardia ablation on right to left inter-atrial conduction. BACKGROUND Radiofrequency (RF) catheter ablation is the procedure of choice for the potential cure of atrioventricular nodal reentrant tachycardia (AVNRT), with high success rates. We hypothesized that, as a result of the close proximity of Koch's triangle and the low inter-atrial septal fibers, RF ablation applied at this region may result in prolongation of inter-atrial conduction time (IACT). METHODS RF ablation of AVNRT was performed by the conventional technique. IACT was measured before and 20 minutes after RF ablation during sinus rhythm. The number of ablations given and the duration of ablation were noted. RESULTS The study group consisted of 48 patients (36 [75%] female, 12 [25%] male, mean age 43.4 ± 14.5 years). RF ablation was successful in all patients. Mean RF time was 4.0 ± 3.3 minutes and the mean number of RF applications was 11.9 ± 9.8. The mean IACT was 70.1 ± 9.0 ms before ablation and 84.9 ± 12.7 ms after ablation, which demonstrated a significant prolongation (p<0.001). The prolongation of IACT was very well correlated with the number of RF applications (r=0.897, p<0.001) and the duration of RF (r=0.779; p<0.001). CONCLUSIONS RF ablation of AVNRT results in prolongation of IACT. The degree of prolongation is associated with the duration and number of RF ablations given. The relationship between this conduction delay and late arrhythmogenesis needs to be evaluated. Introduction Radiofrequency (RF) catheter ablation is the procedure of choice for the potential cure of atrioventricular nodal reentrant tachycardia (AVNRT), with high success rates. Slow pathway (SP) ablation is the best method for the treatment of AVNRT. It has been shown that successful atrial flutter ablation was associated with an altered sequence of left atrial activation. 1 We hypothesized that, as a result of the close proximity of Koch's triangle and the low inter-atrial septal fibers, RF ablation applied at this region may result in prolongation of inter-atrial conduction time (IACT). For this purpose, we evaluated the effect of RF ablation on IACT before and after ablation of the SP, during sinus rhythm. Patients The study group consisted of 48 consecutive patients with common-type AVNRT who had inducible AVNRT with dual or multiple atrioventricular node (AVN) physiology on atrial extra-stimulation and who had undergone SP ablation between April 2004 and March 2005. Exclusion criteria were the presence of clinically significant valvular, congenital, or ischemic heart disease, any type of cardiomyopathy, heart failure, and dilated atria. After informed consent was obtained, the electrophysiologic study (EPS) was performed in the fasting state without any sedative medications. Antiarrhythmic drugs were discontinued for at least 5 half-lives before the ablation. Transthoracic echocardiography was performed in all patients to determine the presence of exclusion criteria. Catheters and Electrogram Recordings Three 5F quadripolar electrode catheters were introduced via the femoral vein and were positioned against the high right atrium (HRA), the His bundle region, and the right ventricular apex (RVA) under fluoroscopic guidance. One 7F decapolar catheter (inter-electrode distance: 2 mm) was introduced into the coronary sinus via the femoral vein.
Patients' data were analyzed on a paper recording at 200 mm/s (Model VR-13, Biomedical Systems, USA), and data were stored for analysis on optical disks using a computer-recording system and were analyzed on screen (Cardio Lab System, Marquette, MI,USA). Basic Study and Diagnosis Single extra-stimulation and incremental pacing were performed at the HRA and RVA sites for the basic electrophysiological evaluation. Prolongation of the atria-His (AH) interval of not less than 50 ms in response to the shortening of the coupling interval of the premature atrial stimulus by 10 ms was defined as the 'jump-up' phenomenon of the AH interval. 2 When the single atrial stimulus could not induce the AH jump-up, a double-atrial-extra-stimulus protocol was performed to achieve shorter premature coupling intervals. 3 AVNRT was diagnosed in accordance with the following criteria: (1) the jump-up phenomenon of the AH interval after single or double atrial stimulus, (2) induction of narrow QRS regular tachycardia without the participation of an accessory pathway, and (3) simultaneous activation of the atrium and ventricle during tachycardia. 4 The second criterion was confirmed by single-ventricle scanning (ventricular pacing during His refractoriness in order to observe retrograde atrial activation) during tachycardia and sinus rhythm to exclude orthodromic AV reciprocal tachycardia involving an accessory pathway. When tachycardia could not be induced in the basic state, atropine was infused intravenously at a dose 1 mg to increase the basic sinus rate by 20-50%, and the same stimulation protocol was repeated. Inter-atrial Conduction Time Measurement The 5F quadripolar electrode catheter was positioned at HRA during sinus rhythm where the earliest RA activation was achieved. Right to left IACT's were measured from earliest right atrial activation to the distal coronary sinus. Left to right IACT's were measured from distal "Influence of Atrioventricular Nodal Reentrant Tachycardia Ablation on Right to Left Interatrial Conduction" coronary sinus pacing to earliest right atrial activation. Measurements were done before ablation and 20 minutes after ablation. In order to be sure for the exact position of catheter during measurements before and after ablation, we stored the image of catheter position before ablation and used this image to guide the position of the catheter for measurement after ablation. In order to exclude the autonomic effects produced by RF energy after the procedure IACT measurement was repeated following atropine. SP Ablation After the basic EPS, a 7F quadripolar steerable ablation catheter with 4 mm-tip and 2.5 mm interelectrode spacing (Marinr, Medtronic, USA) was introduced through the femoral vein. The catheter tip was initially positioned along the tricuspid annulus anterior to the ostium of the CS. In the lowest one-third of the area between the recording site at the His bundle and the ostium of the coronary sinus, the optimal ablation site was determined under guidance of the SP potential as described by Jackman et al 5 with an A/V ratio of 0.1 to 0.5. Radiofrequency energy was delivered with a temperature controlled ablation unit (Atakr RF Ablation system, Medtronic) at 60°C during sinus rhythm. If a junctional beat was recognized within 10s, the energy delivery was continued for 1 min, and was terminated immediately in the cases showing impedance rise or any signs of AH block. After each ablation procedure, the pacing protocol was repeated to evaluate the inducibility of AVNRT. 
When AVNRT was still inducible, the ablation catheter was repositioned to a more superior region along the tricuspid annulus and the ablation procedure was continued. The ablation procedure was considered to be successful when AVNRT could not be induced 20 min after the last delivery of radiofrequency energy. In the post-ablation study, atropine was infused only when it had been needed for AVNRT induction in the basic state before the ablation. A single atrial echo beat or jump-up of the AH interval was allowed to remain. Statistical analysis All conduction times were given in milliseconds and expressed as mean ± standard deviation. Statistical analyses of the data were performed using Student's t test for paired data. Spearman's correlation analyses were used to evaluate correlations between parameters. All analyses were performed using the SPSS 10.0 computer programme. A p value of <0.05 was considered statistically significant. Results The study group consisted of 48 patients (36 [75%] female, 12 [25%] male, mean age 43.4 ± 14.5 years). RF ablation was successful in all patients. We observed no major complications. Mean RF time was 4.0 ± 3.3 minutes and the mean number of RF applications was 11.9 ± 9.8, including very short RF applications to confirm nodal beat formation. Tachycardia was initiated by atrial extrastimulus in 43 patients and by atropine infusion in 5 patients. Post-ablation cycle length was 685.8 ± 81.0 ms. The sinus P wave morphology on the 12-lead surface ECG was unchanged after ablation. The mean IACT was 70.1 ± 9.0 ms before ablation and 84.9 ± 12.7 ms after ablation, which demonstrated a significant prolongation (p<0.001). Left to right inter-atrial conduction time during pacing of the distal coronary sinus was unchanged before and after ablation. The prolongation of IACT was very well correlated with the number of RF applications (r=0.897, p<0.001) and the duration of RF (r=0.779; p<0.001) (Figures 6-7). After ablation, no IACT change was observed in repeated measurements in which atropine was given (the mean IACT was 84.9 ± 12.7 ms after ablation and 83.4 ± 10.1 ms after atropine, p=0.884). Discussion The atrioventricular (AV) junction is a complex anatomic structure located within an area called Koch's triangle. 6,7 Koch's triangle is bounded anteriorly and superiorly by the tendon of Todaro, posteriorly by the coronary sinus, and inferiorly by the annulus fibrosus of the tricuspid ring. The base of the triangle is marked by the ostium of the coronary sinus. SP conduction is registered near the ostium of the coronary sinus. In adults, the compact AV node is relatively uniform in size, with a length of 5 to 7 mm and a width of 2 to 5 mm. 8 A greater variability in the size of Koch's triangle was observed in intraoperative and postmortem studies. 9,10 Fluoroscopic measurement with coronary sinus angiography also found a marked variation in the triangle's dimensions. 11 In the right anterior oblique (RAO) view, the distance between the His potential recording site and the floor of the coronary sinus ostium was 25.9 ± 7.9 mm. Marked differences in the arrangement of the superficial atrial muscle fibers in the area of the triangle of Koch have been reported in normal hearts. 12 Systematic anatomic investigation of the AV node in patients with AVNRT is lacking. 13 In the RAO view, the posteromedial tricuspid annulus between the level of the coronary sinus ostium and the His potential recording site was divided anatomically into posterior, median, and anterior zones.
Energy was delivered along the tricuspid annulus, starting at the most posterior site, the floor of the coronary sinus ostium, and progressing to the most anterior site, just inferior to the His potential recording site. The inducibility of AVNRT was assessed after each apparently successful application. If the tachycardia was still inducible after two radiofrequency energy applications within each of the anatomic zones, the process was repeated. With this approach, the SP was successfully ablated in 188 (97%) of 193 patients. 14

Left atrial activation proceeds over several interatrial connections, including the region of the fossa ovalis and the region of the central fibrous trigone at the apex of the triangle of Koch. Activation of the left atrium over Bachmann's bundle can be observed in 50%-70% of patients. 15 During sinus rhythm, two wave fronts depolarize the left atrium: one antero-lateral, emerging from Bachmann's bundle and the anterior aspect of the fossa ovalis, and one posterior, proceeding from the low interatrial septum. 16,17 However, in some subjects Bachmann's bundle cannot be differentiated from the circular fibers of the anterior wall, or it is not prominent. 18 As a result, the impulse does not travel properly across the anterior wall from the right to the left atrium. Moreover, Bachmann's bundle is not the only pathway connecting the right and left atria. 18,19 Although Bachmann's bundle appeared to be the predominant interatrial connection, technical limitations may have reduced the accuracy of mapping in the posteroseptal LA and in the region of the right inferior pulmonary vein ostium. 20 Markides et al. have described characteristic preferential activation patterns in the human LA. They showed that posterior interatrial connections, as described in recent studies of human atrial anatomy, are at least as important as Bachmann's bundle in right-to-left interatrial conduction during SR. 21 Markides et al. thus demonstrated that there are indeed multiple connections capable of right-to-left atrial conduction and that posterior communications play a major role, in contrast to left-to-right conduction. They also reported that the earliest endocardial breakthrough during sinus rhythm (SR) occurred more frequently in the septal (63%, principally posteroseptal) than in the anterosuperior (37%) LA and varied little with isoproterenol or with high right atrial pacing rate. 21

The findings of our study suggest that multiple RF ablations at the low interatrial septum prolong the IACT by affecting the posterior interatrial fibers. Our observation that the right-to-left IACT was prolonged after ablation while the left-to-right IACT remained constant supports the proposal of Markides et al. This can be explained by the recently recognized importance of the posterior interatrial fibers in the activation of the left atrium and by the close proximity between the low interatrial septum and Koch's triangle.

Clinical Implications

The present results may have clinical implications regarding atrial arrhythmogenesis. In this study, post-ablation interatrial conduction time increased significantly following RF ablation of the low interatrial septum. The role of this augmented conduction delay in the occurrence of late atrial arrhythmias needs to be evaluated in prospective studies.

Study Limitations

The major technical difficulty encountered in the present study was the inability to achieve extensive mapping of the anterior and anteroseptal aspects of the mitral annulus. For ethical reasons, the authors did not consider a trans-septal or retrograde arterial approach in this study.
Furthermore, left atrial activation was mostly represented by coronary sinus electrograms, which provided information on the region of the left atrium adjacent to the mitral annulus. Given the complexity of the septal activation pattern, simultaneous multisite mapping techniques (noncontact mapping or the CARTO system) are required for a more accurate study of the interatrial electrical connections. 22

Conclusion

RF ablation of AVNRT results in prolongation of the IACT. The degree of prolongation is associated with the duration and number of RF applications given. The relationship between this conduction delay and late arrhythmogenesis needs to be evaluated.
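As an illustration of the statistical analysis reported above (a paired comparison of pre- and post-ablation IACT and a Spearman correlation of the IACT prolongation with the amount of RF delivered), the following is a minimal sketch using SciPy rather than SPSS. The arrays are hypothetical placeholder values, not the study data; only the shape of the computation is meant to match the Methods.

```python
# Minimal sketch of the statistical analysis described above (paired t test and
# Spearman correlation), using SciPy instead of SPSS. The numbers below are
# hypothetical placeholders, NOT the values measured in the study.
import numpy as np
from scipy import stats

iact_pre = np.array([68.0, 72.0, 65.0, 74.0, 70.0])    # right-to-left IACT before ablation (ms)
iact_post = np.array([80.0, 88.0, 79.0, 90.0, 86.0])   # right-to-left IACT after ablation (ms)
n_rf = np.array([5, 14, 4, 20, 11])                     # number of RF applications per patient

# Paired t test: did the IACT change significantly after ablation?
t_stat, p_paired = stats.ttest_rel(iact_post, iact_pre)

# Spearman correlation: is the IACT prolongation related to the number of RF applications?
delta_iact = iact_post - iact_pre
rho, p_spearman = stats.spearmanr(n_rf, delta_iact)

print(f"paired t = {t_stat:.2f}, p = {p_paired:.3f}")
print(f"Spearman rho = {rho:.2f}, p = {p_spearman:.3f}")
```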
2014-10-01T00:00:00.000Z
2005-10-01T00:00:00.000
{ "year": 2005, "sha1": "5cb90623785e1ecef383b41ffc513b2bb9ddfaac", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "5cb90623785e1ecef383b41ffc513b2bb9ddfaac", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
111078536
pes2o/s2orc
v3-fos-license
Hybrid simulation: an active power filter case study — The hybrid simulation concept consists of a combination of computer simulation and laboratory tests. This approach is a cost-effective alternative to physically testing the whole system and allows a better understanding of complex coupled systems. This paper describes the implementation of an active power filter (APF) hybrid prototype in which the source system and the load are implemented as a real-time simulation and the static power converter acting as an active power filter is implemented in physical hardware. The hybrid simulation results are also confirmed by implementing a MATLAB-Simulink simulation of the same system used during the active power filter analysis and design stage.

Hybrid simulation: an active power filter case study
Simulación híbrida: caso de estudio en un filtro activo de potencia
Y.A. Garcés, E.A. Cano-Plata, A.J. Ustariz-Farfan

Abstract - The hybrid simulation concept consists of a combination of computer simulation and laboratory tests. This approach is a cost-effective alternative to physically testing the whole system and allows a better understanding of complex coupled systems. This paper describes the implementation of an active power filter (APF) hybrid prototype in which the source system and the load are implemented as a real-time simulation and the static power converter acting as an active power filter is implemented in physical hardware. The hybrid simulation results are also confirmed by implementing a MATLAB-Simulink simulation of the same system used during the active power filter analysis and design stage.

Index terms: active power filter, hybrid simulation, power inverter.

INTRODUCTION

Traditionally, numerical simulation and physical tests have been performed separately and their results then validated against each other. Hybrid simulation is a multidisciplinary technology which draws heavily on mechanical and computational dynamics, control theory, computer science and numerical methods. It currently has applications in the aerospace industry and in civil, mechanical and automotive engineering (Saouma and Sivaselvan, 2008).

A search of the literature using the term "hybrid simulation" revealed that it has been in use for quite some time in various areas of knowledge, such as computer science (Donzelli and Lazeolla), animation and computer graphics (García, 2004; Sifakis et al, 2007), robotics, control theory (Álvarez et al, 2009), bioinformatics, chemistry (Kalstein et al), materials engineering (Foit, 2010) and civil engineering (Chen and Ricles, 2009; Kausel, 1998a; Kausel, 1998b; Muñiz et al, 2002). The common element in all of these applications appears to lie in the combination, coordination and synchronisation of discrete-event simulations on a computer with external processes involving continuous analogue signals.
In the area of logistics information systems, in the data-processing technologies category, hybrid simulation is defined as an approach combining different types of simulations, typically in a distributed environment, usually involving simulators combined with real operational equipment, prototypes of future systems and realistic representations of operational environments. The McGraw-Hill Science & Technology Dictionary defines the term as the use of hybrid computer simulation, understanding a hybrid computer as one designed to handle both analogue and digital data.

This paper is organized as follows. Section 2 defines the active power filter. Section 3 presents the proposed hybrid simulation applied in the APF design stage. Section 4 presents practical applications of the hybrid simulation. Finally, Section 5 presents the most important conclusions.

ACTIVE POWER FILTER

It is very important for power generation companies to prevent current harmonics (which create electromagnetic interference and resonance problems) and to limit the flow of reactive power (which generates transmission losses). Traditionally, passive filters, consisting of tuned LC elements and capacitor banks, were used to filter the harmonics and to compensate the reactive power generated by nonlinear loads. However, such methods have many disadvantages in practical applications.

There has been considerable progress in the field of APFs during the last two decades, with different topologies and control techniques having been proposed for their implementation. APFs are superior to passive filters in terms of filtering capacity, and they improve system stability by removing the resonance problems associated with capacitor banks.

APFs are static power converters that control the generation of currents or voltages to compensate for undesirable power system components; the best known and most widely applied topologies are currently (Fuchs and Masoum, 2008):
• Single-phase active filters.
• Three-phase three-wire active filters.
• Three-phase four-wire active filters.

Also, according to their capacity, they can be grouped as follows:
• Low-power active filters: less than 100 kVA power range and response times between 10 µs and 10 ms. They are mainly used in residential areas, small and medium industries, commercial buildings and hospitals. These applications require sophisticated dynamic filtering techniques and include single- and three-phase systems.
• Medium-power active filters: 100 kVA to 10 MVA power ratings and 100 ms to 1 s response times. They are mainly associated with medium- to high-voltage distribution systems and high-power, high-voltage drive systems where the effect of phase imbalance is negligible. Due to economic concerns and problems associated with high-voltage systems (isolation, series or parallel connection of switches, etc.), these filters are usually designed to perform harmonic cancellation, and reactive-power compensation is not included in their control algorithms.
• High-power active filters: power ratings above 10 MVA and response times in the tens of seconds, mainly associated with power transmission grids, ultra-high-power DC drives, and HVDC.

APFs can be combined with passive filters to form a variety of topologies called hybrid power filters.

A.
Controlling active power filters Filter configuration is the initial state of any active filtering, depending on the nature of the distortion to be dealt with, the system's structure and the required accuracy and compensation speed.Possible responses that may require compensation are: • Harmonic distortion: is the change in the waveform of the supply voltage from the ideal sinusoidal waveform.Besides, is listed as APFs' primary function. • Active and reactive instantaneous power: concerning all the portions of power in the phases that do not contribute to instantaneous active power flow between the source and load. • Components from negative and zero sequence: which refer to the imbalance and neutral current in the electrical networks. • Flicker: is an irregular low frequency modulation that presented in the voltage source. • Voltage sags and swells: are short-duration decreases or increases in steady-state voltage, generally caused by the connection of significant loads or capacitor banks over-compensating the system. B. Shunt active power filter Figure 1 shows the basic layout of a shunt active filter with injection current control, taking the sample current and load voltage and generating the reference currents to be injected into the system and thus draw a sinusoidal current deep at source. Figure 1 shows the main APF system components described as follows: • Transducers: are used in various parts of the system; initially, current and voltage are measured at the load side to be admitted to the control block (v L e i L ) and generates the reference currents (i F ).The injected current compensation system is also measured for the inverter to close the control loop.Voltage and current can be measured using Hall effect transducers. • Power inverter: is a controlled static power converter with a corresponding coupling inductance which is responsible for reproducing the waveform with proper amplitude for filtration; • Inverter control: is usually configured as pulse width modulator (PWM) with a local control loop current to ensure that the current generated by the filter with an acceptable error is the reference current generated by measuring load current and voltage; and • DC bus: consisting of an energy storage element which provides the instantaneous power demanded by the power inverter. HYBRID SIMULATION: APFS DESIGN APPLICATION This section presents the proposed hybrid simulation applied in the APFs design stage (Garcés, 2011).Here, a source−load system was simulated (MATLAB-Simulink-ControlDesk) while the inverter system was implemented in external hardware (dSPACE-Inverter).The hybrid simulation allows developers to accurately and efficiently simulate electrical power systems and their ideas to improve them.The hybrid simulation operates in real time, therefore not only allowing the simulation of the power system, but also making it possible to test physical equipment (an APF based VSI bridge, in this case).This gives developers the means to prove their control strategy, prototypes and final products in a realistic environment. Figure 2 shows the hybrid simulation algorithm scheme.Here, various groups are highlighted in boxes in order to clearly describe that: • Block-1: This group in the algorithm was the source−load system in real-time simulation, recognising the three phase source, the measurement items load side and source side, the nonlinear load consisting of a three phase diode bridge and a resistor. 
• Block-2: This group was part of the ControlDesk software package integrated with Matlab.This block was responsible for generating the PWM pulses to control the external inverter.This block also contained the control algorithm for calculating the reference currents and the on-off control. • Block-3: The dSPACE control analogue inputs of the board were responsible for acquiring the current signal generated by the power inverter to be injected back to the simulated source−load system. • Block-4: The dSPACE control board analogue outputs for external monitors the voltage, current and power variables. COMPARISONS WITH THE CONVENTIONAL SIMULATION To establish a connection with the conventional simulation, in this section, a three-phase ideal voltage source that feed a rectifier with resistive load at the DC side is compensated with an APF.For comparison proposes, done using both conventional and hybrid simulations under same source-load conditions. In both cases, the estimate of the reference current for the control of the APF is given by the instantaneous-time or the average-time compensation strategies (instantaneous reactive power -IRP and perfect harmonic cancellation -PHC methods (Montero et al, 2007;Ustariz et al, 2010)). A. Conventional simulation results To better understand the meaning of hybrid simulation, initially the active filter is modelled as an ideal IGBT power inverter bridge.Figure 3 shows the implemented circuit in MATLAB-Simulink.Here, the emphasized signal corresponds to the a-phase.Figure 4(a) shows the simulation results for the case where the APF is not connected.Figure 4(b) shows the simulation results for the case where the APF is controlled with the -IRP strategy.Figure 4(c) shows the simulation results for the case where the APF is controlled with the -HPC strategy.Here, small transients appear in the moments of switching the bridge inverter.Figure 5 shows the instantaneous active and reactive power on the source side for the simulated cases different.Figure 5(a) shows the simulation results for the case where the APF is not connected.Figure 5(b) shows the simulation results for the case where the APF is controlled with the -IRP strategy.Figure 5(c) shows the simulation results for the case where the APF is controlled with the -HPC strategy. B. Hybrid simulation results Now, the same source-load-filter system shown above is implemented using the proposed hybrid simulation. Figure 6 shows the real time control board management programme showing load and source currents on the left-hand side, the active and reactive power at the top centre, and compensation currents generated by both control algorithms such as that generated by the inverter in the central lower part, as well as on and off controls for APF operation.Figure 7 displays the a-phase current waveforms and spectrum on the source side for the experimental prototype hybrid implemented.Figure 7(a) shows the results for the system without compensation where high harmonic content generated by the nonlinear load was noted.Figure 7(b) shows that current waveform harmonic content decreased after compensation with the -IRP strategy, although not being satisfactory.This was because this compensation strategy sought to reduce reactive and non-cancelation harmonics.Figure 7(c) shows that the -HPC strategy was the most responsive in terms of individual and total harmonic distortion current. C. 
Results analysis

The harmonic and total current harmonic distortion results for each compensation strategy are summarized in Table 1. The results shown in Table 1 are based on both the conventional and the hybrid simulations. It can be clearly seen that the current signal is corrected by each strategy and that a quasi-sinusoidal wave substitutes the original waveform when the filter is connected. The small ripple in the signal is due to the modulation strategy and not to the calculation of the current reference. Furthermore, as expected, the results in Table 1 show that the hybrid simulation is closer to the true behaviour of the inverter bridge than the conventional simulation.

CONCLUSION

In this paper, an active power filter has been implemented using the suggested hybrid simulation. The proposed tool is a first approach to the design of active power filters. The results shown in this paper allow the hybrid simulation to be compared with a fully computer-simulated system. This comparison shows that hybrid simulation is a good option for dealing with hardware implementation issues when testing laboratory prototypes.

Figure 1: Basic diagram of a shunt active power filter
Figure 3: Conventional simulation algorithm scheme
Figure 4: Current waveforms and spectrum on the source side - conventional simulation results: a) without APF, b) IRP strategy and c) PHC strategy
Figure 5: Instantaneous active and reactive power on the source side - conventional simulation results: a) without APF, b) IRP strategy and c) PHC strategy
Figure 6: ControlDesk - hybrid simulation interface
The experimental results captured with the Fluke oscilloscope and the ControlDesk interface are presented in Figures 7 and 8, respectively.
Figure 7: Current waveforms and spectrum on the source side - hybrid simulation results: a) without APF, b) IRP strategy and c) PHC strategy
Figure 8: Instantaneous active and reactive power on the source side - hybrid simulation results: a) without APF, b) IRP strategy and c) PHC strategy
Table 1: Summary of harmonic and current harmonic distortion
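The IRP (p-q) compensation strategy referenced throughout this case study is not spelled out in the text above, so the following is a generic, simplified sketch of how such a reference-current computation is commonly implemented for a balanced three-phase, three-wire system. The Clarke transform, the use of the record mean as the dc real power, and the toy waveforms are assumptions made for illustration; this is not the authors' MATLAB-Simulink/dSPACE implementation. A crude THD estimate, of the kind summarized in Table 1, is included at the end.

```python
# Illustrative sketch (not the authors' code) of the instantaneous reactive
# power (IRP / p-q) strategy for reference-current generation. Assumptions:
# balanced three-wire system with no zero-sequence component, power-invariant
# Clarke transform, and the record mean standing in for the low-pass filter
# that extracts the dc part of the instantaneous real power.
import numpy as np

f1, fs, cycles = 50.0, 10_000.0, 10          # fundamental (Hz), sample rate (Hz), length (cycles)
t = np.arange(int(cycles * fs / f1)) / fs
w = 2 * np.pi * f1

def three_phase(amplitude, phase=0.0, harmonic=1):
    """Return a balanced three-phase set as an array of shape (3, N)."""
    return np.array([amplitude * np.sin(harmonic * (w * t - k * 2 * np.pi / 3) + phase)
                     for k in range(3)])

# Ideal supply voltage and a distorted, lagging load current (fundamental + 5th harmonic).
v_abc = three_phase(311.0)
i_abc = three_phase(10.0, phase=-0.5) + three_phase(3.0, harmonic=5)

# Power-invariant Clarke transform (zero-sequence dropped for a three-wire system).
clarke = np.sqrt(2 / 3) * np.array([[1.0, -0.5, -0.5],
                                    [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2]])
v_ab, i_ab = clarke @ v_abc, clarke @ i_abc

# Instantaneous real and imaginary (reactive) power of p-q theory.
p = v_ab[0] * i_ab[0] + v_ab[1] * i_ab[1]
q = v_ab[0] * i_ab[1] - v_ab[1] * i_ab[0]        # sign convention varies by author

# The source should supply only the dc part of p; over this exactly periodic
# record the dc part is simply the mean (a real-time filter would low-pass it).
p_comp, q_comp = p - np.mean(p), q

# Invert the p-q relation to obtain the compensation (injection) currents.
den = v_ab[0] ** 2 + v_ab[1] ** 2
ic_alpha = (v_ab[0] * p_comp - v_ab[1] * q_comp) / den
ic_beta = (v_ab[1] * p_comp + v_ab[0] * q_comp) / den

# After compensation the source supplies i_load - i_comp.
is_ab = i_ab - np.array([ic_alpha, ic_beta])

def thd(x):
    """Rough THD: ratio of harmonic rms to fundamental rms from the FFT."""
    spectrum = np.abs(np.fft.rfft(x)) / len(x)
    k1 = int(round(f1 * len(x) / fs))             # bin of the fundamental
    harm = np.sqrt(np.sum(spectrum[2 * k1::k1] ** 2))
    return harm / spectrum[k1]

print(f"THD of load current (alpha axis):   {100 * thd(i_ab[0]):.1f} %")
print(f"THD of source current (alpha axis): {100 * thd(is_ab[0]):.1f} %")
```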
2018-12-16T04:22:21.153Z
2011-06-01T00:00:00.000
{ "year": 2011, "sha1": "06065b46570a01df21868ca4729ac06abb80f1aa", "oa_license": "CCBY", "oa_url": "https://revistas.unal.edu.co/index.php/ingeinv/article/download/25215/25712", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "06065b46570a01df21868ca4729ac06abb80f1aa", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Engineering" ] }
978901
pes2o/s2orc
v3-fos-license
Multiple Description Wavelet Coding of Layered Video Using Optimal Redundancy Allocation We present a wavelet-based framework for the encoding of video in multiple descriptions. Using the proposed methodology, the generation of multiple descriptions is performed so that drift is eliminated at the decoder regardless of the number of received descriptions. Moreover, the proposed framework is flexible in the sense that it allows the encoding of video into an arbitrary number of descriptions. We also present a thorough analysis of rate allocation issues and propose three algorithms for the optimal allocation of redundancy. Experimental results for the transmission of video using two descriptions demonstrate the e ffi ciency of the proposed method. INTRODUCTION Multiple description (MD) coding [1,2] offers an attractive framework for the transmission of multimedia over heterogeneous networks.In MD coding, a source is encoded into multiple independently decodable bitstreams which are mutually refining and equally important.At the decoder side, the reconstruction quality is dependent on the number of descriptions that was errorlessly received.Due to its flexibility, multiple description coding is considered a very robust and reliable tool for information transmission. Multiple description coding has been investigated for image [3][4][5] and video transmission [6][7][8][9][10][11].In the particular case of video transmission, the study of MD systems becomes more complicated due to the uncertainty about the information that will be available at the decoder of an MD system. In [12], a methodology was presented for the design of two-channel orthonormal filter banks based on the Lagrangian optimization of the redundancy rate-distortion performance of MD subband coding.In [7], an MD predictive quantization system was introduced, appropriate for the encoding of correlated information sources such as video and speech.The proposed system was used to construct a balanced twin-description interframe MD video coder, and performance results are presented using two packetization strategies.A review on MD coding was recently presented in [13]. In [6], MD video coders were proposed which use motion-compensated prediction.These systems utilize MD transform coding, three separate prediction paths, and side information in order to accommodate all possible scenarios at the decoder.For this reason, three different algorithms for redundancy allocation were implemented, and experimental results were presented.An improved algorithm based on the same principles was presented in [10] where the encoding of the side information was modified in order to be useful even if no drift occurs.In [14], a novel scheme for doubledescription coding was proposed, which is built in the H.263 coder and replicates some selected DCT coefficients in both descriptions.The selection is based on a threshold determined using rate-distortion techniques.In [8], a novel way to deal with redundancy was devised.Temporal redundancy was used to control the tradeoff between drift and redundancy.However, this method does not inherently eliminate drift, that is, the cumulative distortion which occurs whenever the reference frames used at the decoder are not identical to the ones used by the encoder. 
In [9], a drift-free wavelet-based MDC video coding scheme was proposed.However, the redundancy allocation algorithm did not take into consideration the impact of the temporal redundancy into the design of the system, thus resulting in suboptimal coding.The above problem was dealt with in [15], where an improved version of the method in [9] was presented. In [16], a multiple description coding method for video streaming was presented.The method in [16] was based on a 3D discrete wavelet transform.Redundancy was allocated by applying Lagrangian optimization techniques for the appropriate selection of subband quantizers.In [17], an MDC scheme for video coding was presented based on a spatiotemporal multiresolution analysis.Correlation between the two descriptions was introduced in the temporal domain by using an oversampled motion-compensated filter bank. In the present paper, the intraframe and the motion compensated prediction residual frames are wavelet-coded and divided into a redundant and an enhancement part with the redundant part encoded in all descriptions and the enhancement part distributed in several descriptions.The "repeat or split" strategy was chosen over other proposed techniques, such as that presented in [2] since, in our case, drift-free reconstruction is straightforward.Using the above framework, we present and evaluate two techniques for the multiple description coding of video sequences. (i) In the first technique, only the redundant part is used for the construction of reference frames and thus the resulting video coding scheme is able to perform drift-free reconstruction.Since the quality of the reference frame affects the coding efficiency of the system, an algorithm incorporating the impact of temporal correlation is also presented for the allocation of redundancy among multiple descriptions. (ii) In the second technique, both the redundant and the nonredundant parts of the stream are used for the creation of the reference frame.This technique uses high-quality reference frames but the reconstructed video suffers from drift in case of transmission over channels with severe loss. Additionally, in the present paper the problem of optimal redundancy allocation, that is, the appropriate selection of the redundant and the enhancement parts for each frame, is investigated.Specifically, this problem is formulated as the maximization of the average video quality under the constraint of a target total rate.Three variations of an optimization algorithm are proposed and evaluated in terms of their complexity.It should be noted here that, in our system, the compression and the optimization steps are distinct.In this manner, our redundancy allocation algorithm is applied directly to compressed source layers, that is, the algorithm actually parses the compressed stream to multiple descriptions.This clearly differentiates our algorithm from the method in [16] in which the generation of descriptions is performed by application of appropriate quantizers to the transform coefficients. 
The structure of the paper is as follows.In Section 2, the proposed framework for multiple description coding of video is presented.Section 3 describes the wavelet coding of intraframes and motion compensation residuals.In Section 4, the exploitation of temporal correlation during the optimization process is discussed.In Section 5, the redundancy allocation problem is formulated.The complexity of the redundancy allocation algorithm is studied in Section 6, and a faster algorithm is presented in Section 7 based on the Equivalent Continuous Problem.In Section 8, experimental results are presented and finally conclusions are drawn in Section 9. PROPOSED FRAMEWORK FOR MULTIPLE DESCRIPTION GENERATION The proposed system for the generation of multiple descriptions is depicted in Figures 1 and 2. Initially, the available bit budget is evenly allocated to the frames in a group of pictures (GOP).The first frame in each GOP is intra-coded using block-based wavelet coding.The resulting coded stream is distributed over a number of descriptions.A portion of the bitstream is redundant in all descriptions.The correlation between consecutive frames is subsequently removed using overlapped block motion compensation (OBMC) [18].The reference frames used to calculate motion vectors are the original frames in order to ensure good precision in the estimation of the motion vectors.Motion vectors are losslessly coded using the techniques in [19] and are included in all descriptions. Using the previously estimated half-pixel accurate motion vectors, the procedure for the generation of multiple descriptions for the interframes continues as follows: initially, the first interframe is compensated.No intra-coding is used in interframes.We employ two different mechanisms for the derivation of reference frames that are used during motion compensation.In the first, a version of the I-frame, reconstructed using only the redundant part of the bitstream so far coded, is used as reference for the compensation process.In the second, both redundant and nonredundant parts are used for the derivation of reference frames in motion compensation.The prediction error is derived by subtracting the compensated prediction from the original interframe.The prediction error is wavelet transformed and coded into multiple descriptions.A version of the error frame is reconstructed using either the redundant part or both redundant and nonredundant information of the coded bitstream depending on which of the two mechanisms described above is used.The reconstructed error frame is added to the compensated frame.The resulting interframe (instead of the original) will serve as the reference frame for the compensation of the next interframe.The same procedure is iterated until all frames in a GOP are treated. 
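The encoding loop just described can be summarized in a pseudocode-style sketch. All helper functions below are illustrative stubs invented for this sketch (the real coder uses OBMC, a 9-7 wavelet and context-based arithmetic coding); only the control flow, in particular rebuilding the reference frame from the redundant part only in the drift-free variant, follows the description above.

```python
# Pseudocode-style sketch of the per-GOP encoding loop described above.
# The helpers are stand-ins, not the actual coder: only the structure
# (intra-code the first frame, then predict each interframe from a reference
# rebuilt from the REDUNDANT part only) mirrors the text.
import numpy as np

def wavelet_code(frame):
    """Stub 'coder': returns (redundant_part, enhancement_part) of a frame."""
    coarse = np.round(frame / 8) * 8          # stand-in for the redundant layers
    return coarse, frame - coarse             # enhancement = what the coarse part misses

def motion_compensate(reference):
    """Stub motion compensation: identity prediction (zero motion)."""
    return reference

def encode_gop(frames, drift_free=True):
    descriptions = []                                       # (redundant, enhancement) per frame
    redundant, enhancement = wavelet_code(frames[0])        # intra-coded first frame
    descriptions.append((redundant, enhancement))
    reference = redundant if drift_free else redundant + enhancement

    for frame in frames[1:]:
        predicted = motion_compensate(reference)
        residual = frame - predicted
        redundant, enhancement = wavelet_code(residual)
        descriptions.append((redundant, enhancement))
        # Drift-free variant: rebuild the next reference from the redundant part
        # only, so the decoder can reproduce it even if a single description arrives.
        rec_residual = redundant if drift_free else redundant + enhancement
        reference = predicted + rec_residual
    return descriptions

gop = [np.random.default_rng(i).integers(0, 256, (16, 16)).astype(float) for i in range(4)]
print(len(encode_gop(gop)), "frames encoded")
```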
Using the above methodology, the proposed multiple description video coding scheme is able to produce an arbitrary number of descriptions at the cost of reduced compression efficiency whenever the number of descriptions is large.In each description, there is a redundant part, which is always used for the derivation of the reference frame in the motion compensation process, and a complementary refinement part, which is used to improve the quality of each description and may or may not be used for the derivation of the reference frame.When both redundant and nonredundant information is used, reference frames of high quality are available.When only the redundant part is used, the motion compensation process performed at the encoder can be identically replicated at the decoder even if only one description is received.This is a very important feature of our coder since, if the decoder is unable to use the same reference frames, errors will accumulate in the decoded video sequence causing the aforementioned drift distortion [20].With the proposed methodology, which relies only on the redundant part for motion compensation, the possibility of facing drift at the decoder is eliminated and thus a reconstructed sequence of high quality is obtained even if only some (or even a single) descriptions are received.The determination of the portion of the bitstream that is redundant in all descriptions is performed after the wavelet coding of the intra and the residual error frames.The wavelet coefficients are coded using a simple bitplane encoder, based on the context models in [21].Specifically, the decomposed frame is divided into blocks of equal dimensions.Each block may be included in some or all descriptions.Thus, some blocks may appear in all descriptions whereas some other blocks appear in only one of the descriptions.The inclusion of blocks in one or more descriptions is done so as to maximize the average quality at the decoder, subject to a total rate constraint, and attain fairly equal bitrate and fairly equal quality descriptions.Such an assignment is depicted in Figure 3(a).A representation of the redundant and nonredundant part of the coded bitstream for a two-description system is shown in Figure 3(b).The generation of descriptions can be achieved by including appropriate blocks of wavelet coefficients in one or both of the descriptions.In the case of two descriptions, this is achieved by using the checkerboard pattern which we originally proposed in [9].This approach bears some resemblance with the flexible macroblock ordering (FMO) approach in H.264 (see, e.g., [22]).However, there are fundamental differences between FMO and our approach which arise from the fact that our method operates in the wavelet domain whereas FMO is applied in the spatial domain.Since the FMO approach uses spatial blocks, the loss of a block would mean complete loss of information for that spatial region.This is why in FMO at least a coarsely quantized version of a chess-block need be included in each description.Clearly, this means that using FMO there is much less control over redundancy since information about all blocks need be encoded in both descriptions.Moreover, since redundancy is introduced by the use of different quantizers, and not by explicitly including the same portion of the bitstream in all descriptions, the elimination of drift is not a trivial task.Finally, in FMO there is a need for error concealment in case the reconstructed quality in a spatial region is not good.Unlike the FMO approach, in 
our system, a loss of a wavelet block (due to the loss of the description in which the block is encoded) causes only the loss of some detail in the reconstructed frame.Moreover, in our method, most wavelet blocks are included in only one of the descriptions and only a few important blocks are included in both descriptions.This is possible since the wavelet transform compacts the important information in a few blocks (subbands) of transform coefficients.This strategy seems to be naturally more suitable for MD coding since it allows better manipulation of redundancy and generally achieves lower redundancy levels. Throughout our manuscript we assume that no B-frames are encoded (see Figure 4).However, this assumption does not affect the significance of our work, which can also be applied when using B-frames.Suppose that we have an intra-coded frame, several (unidirectionally predicted) interframes, and some other frames that are to be bidirectionally predicted using the intra-and interframes.Apparently, our MD generation methodology is directly applicable to the sequence of intra-and interframes.In each description, bidirectionally predicted frames could be encoded based on the reconstructions of intra-and interframes which are achieved using the bitstream in the same description.Note that, since B-frames do not propagate errors and do not cause drift, the reconstructed versions of intra-and interframes can be obtained using not only the redundant part of the description but also using the nonredundant part as well.An interesting and desirable result of this strategy is that, as these reconstructions will be different in the two descriptions, the associated residuals of the bidirectionally predicted frames will be inherently different in the two descriptions.This is perfectly consistent with the MD coding principle of encoding different versions of the information in each description. In the ensuing section, the complete wavelet coding method, used for both intra-and interframes, is described. BLOCK-BASED WAVELET CODING OF MOTION COMPENSATION RESIDUALS The intra-frame and the motion-compensated residuals are decomposed using a wavelet transform based on the 9-7 biorthogonal filter bank [23].The maximum absolute coefficient in each subband is placed in the image header.All subband maxima are arithmetically encoded.The transmission of information takes place in a bitplane-wise manner starting from the most significant bit (MSB) to the least significant bit (LSB).Within each bitplane, subbands are encoded in a predefined scanning order from the lowest to the highest resolution. 
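Returning to the assignment of wavelet blocks to descriptions discussed above, a minimal sketch of a checkerboard-style split for two descriptions is shown below. The grid size and the choice of which blocks are duplicated are assumptions made for illustration; in the paper the redundant part is chosen by the allocation algorithm of the following sections.

```python
# Sketch of a checkerboard-style split of wavelet blocks into two descriptions.
# The grid size and the duplicated blocks are illustrative only; the actual
# allocation is decided by the redundancy allocation algorithm described later.
import numpy as np

blocks_y, blocks_x = 8, 8                       # grid of wavelet-coefficient blocks
yy, xx = np.meshgrid(np.arange(blocks_y), np.arange(blocks_x), indexing="ij")

in_desc1 = (yy + xx) % 2 == 0                   # checkerboard: half the blocks each
in_desc2 = ~in_desc1

# Assume the top-left blocks (lowest-frequency subbands) are important enough
# to be repeated in both descriptions (the "redundant part").
redundant = (yy < 2) & (xx < 2)
in_desc1 |= redundant
in_desc2 |= redundant

print("blocks in description 1:", int(in_desc1.sum()))
print("blocks in description 2:", int(in_desc2.sum()))
print("blocks repeated in both:", int((in_desc1 & in_desc2).sum()))
```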
Each subband is divided into a set of blocks.The default block size is (W/2 L+1 ) × (H/2 L+1 ), where W, H are the width and height of the frame, respectively, and L is the maximum level of the wavelet decomposition.For each block, first the coefficients whose most significant bit is on the bitplane currently coded are identified by comparison to a threshold T = 2 n , where n is the index of the bitplane that is being coded.If a coefficient becomes significant, that is, it is found to be greater than or equal to T for the first time, then its sign is coded.This process is often called significance identification [24] and the compressed significance map for a block is termed significance layer.Similarly, the refinement layer is defined as the one containing the nth bitplane of coefficients (in a block) found significant in previous passes.In our coder, refinement layers for the nth bitplane are transmitted immediately after the transmission of significance layers for the same bitplane.Note that each layer contains significant or refinement information for a single block and that the even-tual allocation of layers in descriptions is performed by taking into consideration the fact that the decoding of a layer is possible only when all its predecessor layers in the same block are also included in the description. The nth bit in the binary representation of a coefficient f in subband B is coded if the maximum coefficient in the subband B is greater than or equal to the current threshold ( The deployment of the above rule reduces drastically the number of coefficients whose significance is tested during the coding of a significance identification layer.For this reason, subband maxima are included in all descriptions.However, in order to further reduce the number of symbols that have to be coded during the layer coding stage, a single bit is initially coded to indicate whether all coefficients in a block are insignificant.A value of "1" of this bit indicates that the block contains no significant coefficients and no further information is coded for this block. The symbol streams described above are coded using adaptive arithmetic codes [25].The context modelling strategy in [21] is followed for the coding of significance identification layers.Refinement bits are entropy coded using a single adaptive arithmetic model.The max frequency count of the arithmetic coder was set equal to 512 in order to allow fast adaptation of the coder to the statistics of the incoming symbol stream. In order to apply an efficient redundancy allocation algorithm that takes into account the actual rate-distortion characteristics of the compressed stream, the distortion decrease achieved by the transmission of each bitplane should be calculated [21,26] for each layer.The distortion decrease caused by the transmission of the ith layer is given by where n is the index of the bitplane included in the layer, t is the coefficient index, and c, c denote the original and the reconstructed wavelet coefficients, respectively.Each layer corresponding to a specific block of wavelet coefficients cause different reduction in the distortion.Analytical expressions for the distortion reduction caused by the transmission of layers can be found in [26].Let R i be the number of bits required for the coding of the ith layer.When all pairs (D i , R i ) are determined, the redundancy allocation algorithm can be applied.This is examined in the following sections. 
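The per-block significance and refinement layers, and the (D_i, R_i) pairs they contribute to the allocation algorithm, can be illustrated with the sketch below. Entropy coding and the context models of [21] are omitted and the "rate" is a raw symbol count, so this only shows how the rate-distortion points could be gathered; it does not reproduce the actual coder.

```python
# Sketch of gathering (D_i, R_i) pairs from bitplane coding of one block of
# coefficients. Arithmetic coding is omitted and the rate is just a symbol
# count, so the numbers are only illustrative.
import numpy as np

rng = np.random.default_rng(0)
coeffs = np.round(rng.laplace(scale=20.0, size=64))     # stand-in wavelet coefficients
recon = np.zeros_like(coeffs)                            # decoder-side reconstruction
significant = np.zeros(coeffs.shape, dtype=bool)

def sq_err(a, b):
    return float(np.sum((a - b) ** 2))

layers = []                                              # list of (D_i, R_i) pairs
n_max = int(np.floor(np.log2(np.abs(coeffs).max())))     # most significant bitplane

for n in range(n_max, -1, -1):
    threshold = 2 ** n
    for pass_name in ("significance", "refinement"):
        before = sq_err(coeffs, recon)
        symbols = 0
        for t in range(coeffs.size):
            if pass_name == "significance" and not significant[t]:
                symbols += 1                                          # significance bit
                if abs(coeffs[t]) >= threshold:
                    significant[t] = True
                    symbols += 1                                      # sign bit
                    recon[t] = np.sign(coeffs[t]) * 1.5 * threshold   # midpoint reconstruction
            elif pass_name == "refinement" and significant[t] and abs(coeffs[t]) >= 2 * threshold:
                symbols += 1                                          # one refinement bit
                # move the reconstruction to the midpoint of the refined interval
                recon[t] = np.sign(coeffs[t]) * ((abs(coeffs[t]) // threshold) * threshold
                                                 + 0.5 * threshold)
        layers.append((before - sq_err(coeffs, recon), symbols))

for i, (d, r) in enumerate(layers[:6]):
    print(f"layer {i}: distortion decrease {d:10.1f}, rate {r:3d} symbols")
```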
TEMPORAL CORRELATION COMPUTATION

An optimization algorithm should take into consideration the temporal correlation linking adjacent video frames. Modelling the dependency of adjacent frames in a video sequence is a nontrivial problem. In this paper, in order to deal with this issue, we introduce a temporal correlation coefficient a_i, 0 ≤ a_i < 1, meant to incorporate the effect of temporal correlation of layer i into the optimization algorithm. Specifically, we assume (a similar conclusion was drawn in [27]) that the distortion reduction in frame m + 1 is a_i D_i, where m is the frame index. In the same manner, the additional distortion reduction a_i D_i in frame m + 1 stimulates an additional distortion reduction a_j (a_i D_i) in frame m + 2, a_k (a_j (a_i D_i)) in frame m + 3, and so on, where a_j, a_k, . . . are the temporal correlation coefficients for frames m + 1, m + 2, . . ., correspondingly. We further assume that a_i, a_j, a_k are approximately equal for all frames in a GOP, since the dependency between consecutive frames in the same GOP is not expected to exhibit significant variations. In general, the distortion reduction in frame n caused by the transmission of the ith layer in frame m, m < n, is a_i^{n-m} D_i. Thus, as the temporal distance n - m between m and n increases, the additional distortion reduction decreases exponentially. Assuming that the total number of frames in a GOP is M, the total distortion decrease is given by

D_i^{total} = D_i + a_i D_i + a_i^2 D_i + ... + a_i^{M-m} D_i,

where a_i D_i is the distortion reduction caused in the (m + 1)th frame, a_i^2 D_i is the distortion reduction in the (m + 2)th frame, and so forth. The above quantity is equivalently written as the sum

D_i^{total} = D_i + D_i \sum_{k=1}^{M-m} a_i^k,

where the first term is the distortion reduction in the current frame and the second term denotes the distortion reduction in all subsequent frames. If

C_i = \sum_{k=1}^{M-m} a_i^k,

the total distortion reduction caused by the transmission of the ith layer in the mth frame can now be expressed as

D_i^{total} = D_i (1 + C_i),

where D_i C_i is the cumulative distortion reduction that is caused in the subsequent frames due to the higher quality of the current (reference) frame m. Clearly, with this formulation, layers in frames lying at the beginning of a GOP are more important than layers of frames at the end of the GOP, since the quality of the former affects the quality of the latter. The coefficients a_i, and hence C_i, which quantify the impact of the current frame on the quality of subsequent frames, were calculated using the methods in [27].

FORMULATION OF THE REDUNDANCY ALLOCATION PROBLEM

In order to address the problem of optimal allocation in MD video coding, it is important to derive expressions for the average video quality at the decoder and the total rate used in terms of the assignment strategy. Although in the experimental results section we consider the average PSNR over the entire sequence, in this section we will attempt to maximize the distortion reduction incurred by each frame of the GOP separately. This simplification will not significantly affect the optimality of the strategy derived here, while it will serve in addressing the problem of optimal assignment in a more rigorous way and in providing useful insight into the optimization procedure.

Let us assume that each frame is coded into L layers, each using R_i bits and contributing a reduction of distortion equal to D_i relative to the quality of the current frame and C_i D_i, i = 1, . .
., L, to the quality of the next frames in the GOP, 2 when used for motion compensation for the next frames.We further assume that the curve appearing in Figure 5(a) is concave, namely, This assumption is generally valid for the case of our coder (a curve based on real data is shown in Figure 5(b)).We further note that lower-indexed layers correspond to coarse image information whereas high-indexed layers correspond to detail information.Between adjacent frames, coarse information is much more correlated than detail information.Thus, a i is fully expected to decrease with i.Since C i is obviously a monotone function of a i , this implies that: an observation which is also verified experimentally.This ensures that (7) will still hold, if we replace the D i 's with We wish to encode the initial video sequence into K descriptions, each of which will either provide a coarse reconstruction of the initial sequence by itself or improve a reconstruction based on one of the other descriptions.To this end, for every frame in the GOP we will assign a number of layers to each description in a way so as to maximize the distortion reduction incurred under a limited-rate constraint.We will consider the case of double-description coding (K = 2).The general case is studied in Appendix B. Let I = {1, . . ., L} denote the set of the possible values that the layer indices may assume.The problem of providing two descriptions for each frame in the GOP is equivalent to assigning a set of layer indices I 1 ⊂ I to the first and a set I 2 ⊂ I to the second description.Subsequently, the two descriptions will be transmitted over two communication links to the decoder.If A k represents the event that description k reaches the decoder and p denotes the probability that each stream is successfully delivered to the decoder (i.e., , four events exist for each frame: no descriptions are delivered.The probability of each of these events may be easily derived if we make the reasonable assumption that the events A 1 and A 2 are independent: ), d(B 0 ) denote, respectively, the distortion reduction at the decoder for the current frame when each of the events B 1 , B 2 , B 12 , and B 0 occurs.Their values may be calculated as Moreover, when at least one of the descriptions arrives at the decoder, the layers common to all descriptions will be used for the motion compensation of the next frame in the GOP, incurring an additional distortion reduction of C i D i for each layer.Let B 1|2 B c 0 denote the event that at least one description reaches the decoder and I ∩ I 1 ∩ I 2 denote the set of indices common to both descriptions.Then, Pr{B 1|2 } = p(2 − p) and the corresponding distortion reduc-tion will be Consequently, the expected distortion reduction, D e (I 1 , I 2 ), incurred at the decoder, when the index-assignment policy (I 1 , I 2 ) is used, will be and after some simple manipulations we arrive at where I (I 1 ∪ I 2 ) \ I ∩ is the set of indices contained in exactly one of the descriptions. 
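Because the displayed equations for the event probabilities and for D_e(I_1, I_2) did not survive extraction, the sketch below reconstructs the computation directly from the quantities defined in the text: independent description arrivals with probability p, d(B) equal to the sum of D_i over the decodable layers, and an extra C_i D_i contribution for layers common to both descriptions whenever at least one description arrives (probability p(2 - p)). The exact closed form is therefore an inference from these definitions, not a quotation of equation (14).

```python
# Sketch of the expected distortion reduction D_e(I1, I2) for two descriptions,
# built from the events defined in the text: each description arrives
# independently with probability p; d(B) sums the D_i of the decodable layers;
# layers common to both descriptions additionally contribute C_i * D_i through
# motion compensation whenever at least one description arrives.
def expected_distortion_reduction(I1, I2, D, C, p):
    I1, I2 = set(I1), set(I2)
    common = I1 & I2
    d = lambda idx: sum(D[i] for i in idx)

    both = p * p                      # Pr{B12}: both descriptions arrive
    only1 = p * (1 - p)               # Pr{B1}: only description 1 arrives
    only2 = (1 - p) * p               # Pr{B2}: only description 2 arrives
    at_least_one = p * (2 - p)        # Pr{B1|2}: at least one description arrives

    de = both * d(I1 | I2) + only1 * d(I1) + only2 * d(I2)
    de += at_least_one * sum(C[i] * D[i] for i in common)
    return de

# Tiny hypothetical example: 4 layers, layer 0 duplicated in both descriptions.
D = {0: 40.0, 1: 25.0, 2: 10.0, 3: 5.0}
C = {0: 1.2, 1: 0.8, 2: 0.4, 3: 0.2}
print(expected_distortion_reduction({0, 1, 3}, {0, 2}, D, C, p=0.9))
```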
The total rate, R(I 1 , I 2 ), used by the two streams is and may also be expressed as Assuming that the total rate used may not exceed a predefined rate budget R B , our purpose is to identify the indexassignment sets I 1 and I 2 , which do not violate the rate constraint and maximize the expected distortion reduction at the decoder max It is clear from ( 14) and ( 16) that the expected distortion reduction and total rate depend upon the sets I ∩ and I .Furthermore, the factor p in the expected distortion reduction ( 14) may be ignored for the optimization procedure for the sake of simplicity.Therefore, the maximization problem may be rephrased as Maximization problem Find disjoint sets I ∩ , I ⊂ I maximizing subject to the constraint The solution of the above problem will yield the optimal sets I ∩ and I , where I ∩ will contain the indices of the layers assigned to both streams and I will contain the indices assigned only to one of the streams.In order to obtain the optimal I 1 , I 2 , we need to further partition I into two disjoint index-assignment sets, one for each stream.It is clear from (14), however, that any such partition will yield sets I 1 , I 2 , inducing the same expected distortion reduction at the decoder; hence, the partition of I may be arbitrary (we may even assign the whole set I to only one of the streams).However, since balanced MD coding is sought, an acceptable partitioning should result in fairly equal total rates of I 1 and I 2 .In order to achieve this, the indices in I may be ordered in terms of decreasing corresponding rates R i and be assigned alternately to each stream. COMPLEXITY ANALYSIS If we were to solve the maximization problem (17) by exhaustively examining all possible realizations of I 1 and I 2 , this would involve 2 2L possibilities, since there are 2 L subsets of the index set I. Clearly, the optimal solution will be achieved by choosing any pair of sets I 1 and I 2 resulting in the same sets I * ∩ and I * , which solve the maximization problem described by (18) and (19).Hence, we only need to examine all possible realizations of disjoint sets I ∩ , I ⊂ I. Note that since there are 2 L possible subsets of the index set I, any subset A ⊂ I may be expressed as the binary max D = 0 (maximum distortion originally 0) I * ∩ = I * = 0 (optimal sets originally empty) for I ∩ = 0, . . ., 2 L − 1 (all possible realizations of I ∩ ) for I = 0, . . ., 2 (1) and I * (2) .The optimal index assignment is given by 2) . representation of a number between 0 and 2 L − 1, with the ith bit being 1, if i ∈ A and 0 otherwise.An exhaustive search algorithm which will determine the optimal solution I * ∩ , I * to the maximization problem is shown in Algorithm 1. Although this algorithm will always produce an optimal solution, the number of possible realizations of I ∩ and I , over which the search will be performed, is 3 L , still prohibitive even for moderate values of L. The NP-completeness of the maximization problem described by (18) and (19) can also be shown by formulating it as an integer (0-1) programming problem as shown in Appendix A. 
In view of these remarks, it would be desirable to establish some optimality results that will narrow the number of possible candidate solutions or devise techniques that would search through a smaller set of possible near-optimal solutions.To this end, the following will prove helpful.Lemma 1.If I ∩ and I are fixed and j ∈ I ∩ or j ∈ I , replacing layer j with layers of higher indices, such that their total rate does not exceed R j , would result in smaller expected distortion reduction. Proof.Assume that j ∈ I ∩ (the proof for j ∈ I is similar) and j 1 , . . ., If I ∩ is replaced by the set I ∩ (I ∩ \ { j}) ∪ { j 1 , . . ., j k }, then the rate constraint (19) would still be satisfied and the expected distortion reduction (18) would decrease by Using and (20) it is straightforward to show that the outcome of ( 21) is nonnegative; hence, this replacement would prove inefficient. The same also holds if we were to replace more than one lower-indexed layers with higher-indexed ones of smaller total rate.In other words, Lemma 1 suggests that, if possible (i.e., if the rate constraint is not violated), we should replace higher-indexed layers with lower-indexed ones with appropriate total rate.However, Lemma 1 might mislead us to assume that the optimal solution would consist of sets I * ∩ and I * comprising the lower-indexed layers, that is, This would not be true in case the rate margin R M R B − 2 i∈I∩ R i − i∈I R i can be filled by replacing one (or more) of the lower-indexed layers j with one or more higherindexed layers It is possible that in this case the resulting expected distortion reduction actually be larger, as shown in the example below. Counterexample 1.Let R B = 21.5, p = 0.8, C i = 0, i = 1, . . ., L, and R i , D i given by the following table: ) resulting in total rate 20.5 and expected distortion reduction 2.61.There is, however, a rate margin R M = R B − 20.5 = 1 that may be taken advantage of, if I ∩ or I is properly chosen.In fact, if the sets I ∩ = {2, 4} and I = {1, 4, 5} are used, the total rate matches the rate budget R B and the expected distortion reduction increases slightly to 2.62.This counterexample verifies that the optimal solution will not always be of the form (22); however, extensive experimentation showed that in most cases the sets I ∩ and I given by ( 22) provide a near-optimal solution, as was indeed the case in the previous example. An improved exhaustive search algorithm, which stems from this remark, would consider only sets I ∩ , I of the form (22).The number of possible candidates may be further reduced based on the following lemmas. Lemma 2. L * cannot exceed any certain value beyond which the sum Proof.This lemma is a direct consequence of the total rate constraint (19) for L * ∩ = 0. Lemma 3. L * cannot be smaller than any value for which the sum max D = 0 (maximum distortion originally 0) L * ∩ = L * = 0 (optimal sets originally empty) Proof. Lemma 4. For a given L * , the optimal value of L * ∩ is the largest integer l ≤ L * , for which the total rate for I ∩ does not exceed the remaining available rate, Proof.It is straightforward to prove that the more layers I ∩ comprises, the better the distortion reduction will be.Therefore, we should try to "fit" as many layers as possible in the remaining available rate. Lemmas 2-4 may be used to narrow down the exhaustive search space.In particular, Lemmas 2 and 3 suggest that we should examine values of L * , in a set {L 1 , . . 
., L 2 }, while Lemma 4 suggests that for each of these values of L * there is a unique optimal value of L * ∩ ; hence, it suffices to examine only L 2 − L 1 + 1 < L cases.In view of these results, we can describe the improved exhaustive search procedure in Algorithm 2. The while loop in this algorithm searches for the maximum value of L ∩ fitting in the rate margin, since, as can be easily verified, the corresponding value of L ∩ for L + 1 will be smaller than that for L (the previous value of L ∩ ).Hence, the search is performed over L 2 − L 1 + 1 possible values of L * and L 1 possible values of L * ∩ and the complexity of the algorithm will be linear in L. In general, the improved exhaustive search algorithm will result in sets I * ∩ and I * , which do not exactly meet the rate constraint.In this case, there will be a rate margin R M R B − 2 i∈I * ∩ R i − i∈I * R i , which can be "filled" with smaller segments outside I * ∩ or I * .A further improvement would search for possible augmentations of I * ∩ or I * , so that the total rate be closer to the rate budget R B . As already stated, this algorithm will, in general, yield suboptimal yet near-optimal solutions to the maximization problem.A further (and more important) disadvantage of this algorithm is that, when applied in the general case of K > 2 descriptions, its complexity will be even higher.If we are to construct a low-complexity algorithm for the general case, we may resort to heuristics emanating from a continuous-case consideration of the problem.This is explored in the next section. EQUIVALENT CONTINUOUS PROBLEM By examining closely the discrete maximization problem described by ( 18) and ( 19), we first note that the sums i∈I∩ D i (1 + C i ), i∈I∩ R i and i∈I D i , i∈I R i are the distortion reduction and rate "measures" of I ∩ and I respectively.A further restriction arises from the requirement that I ∩ and I have to comprise intervals dictated by the available blocks and that partial blocks may not be used.If we relax this restriction, we may formulate a corresponding Continuous Maximization Problem, which is easier to solve. Assume that the curve appearing in Figure 5 represents a continuous, differentiable, nondecreasing, and concave function D(R) of the rate R. Then the derivative D (R) will be a well-defined, continuous, positive, and decreasing function of R, for every R ∈ R + .In a similar fashion, assume that the fraction of distortion reduction due to motion compensation is provided by a continuous decreasing function c(R) and that the curve corresponding to the products D i C i defines a function C(R) with derivative C (R) = D (R)c(R), which will have properties similar to those of D (R). 3 For any rate interval [r 1 , r 2 ], let μ R , μ D , μ C denote the following quantities: In practice, the number of intervals of the form [r 1 , r 2 ] is always finite (with an upper bound equal to the number of bits in the compressed bitstream).Obviously, measure of a union of a finite number of disjoint intervals of the form [r 1 , r 2 ] would equal the sum of the measures of these intervals.Thus, a continuous version of the discrete maximization problem described by ( 18) and ( 19) may now correspondingly be formulated as follows. 
Continuous maximization problem Find disjoint sets S ∩ , S ⊂ R + maximizing subject to the constraint With the further reasonable assumption that S ∩ and S are unions of closed intervals, properties stronger than Lemma 1 may be established for the continuous problem, leading to optimal solutions.Lemma 5.If S is fixed, the optimal S ∩ comprises the "smallest-rate region" of the remaining space R + \ S , that is, for some positive rate R ∩ . Proof.We will outline the general concept behind (26).Assume that ( 26) does not hold.Then there exist δ > 0 and (remove the second interval and add the first), then the rate constraint will still be met and the increase in expected distortion reduction (24) will be where (α) results from r 2 − r 1 > 0 and the fact that D (•) and c(•) are decreasing and (β) involves a simple change of integration variable.It follows, therefore, that S ∩ will not be optimal (since it is outperformed by S ∩ ) unless it is given by ( 26) for some R ∩ . In a similar manner, it is possible to establish an equivalent property for S .Lemma 6.If S ∩ is fixed, the optimal S comprises the "smallest-rate region" of the remaining space R + \ S ∩ , that is, Furthermore, concavity of D(•) implies the following. Proof.This is true because the contribution of S ∩ in the expected distortion reduction (24) involves the factor 2 − p > 1 and the function C(R) ≥ D(R), R ∈ R + .Hence, incorporating the smaller-rate interval [r 1 , r 1 + δ] in S ∩ and the higherrate interval [r 2 , r 2 + δ] in S will yield smaller expected distortion, as is easily be verified. Lemmas 5, 6, and 7 suggest that the jointly optimal sets S * ∩ , S * will be intervals of the form for some R ≥ R ∩ ≥ 0. In terms of the original maximization problem, (28) would provide the optimal solution if the (0-1) constraint for x is relaxed, namely, if assignment of partial blocks is allowed. In view of ( 28), the equivalent continuous problem may be restated as follows. Continuous maximization problem This is a simple Lagrangian maximization problem with optimal solution R * ∩ , R * R satisfying the constraint (30) at the boundary.The optimal R * ∩ should satisfy which after some simple manipulations translates to the condition Observe that, since D (•) and c(•) are decreasing, φ(•) will be continuous and increasing in the interval [0, R B /2] and the continuous maximization problem will not involve local maxima.Also, the smallest value of φ(•) will be φ(0 0) and the largest value will be the optimal value for R * ∩ will be φ −1 (1 − p).Otherwise (32) does not have a solution and optimality is achieved either at 0 or R B /2.In general, we can write while R * = R B − R * ∩ .Returning to the discrete maximization problem, it is reasonable to assume that a near-optimal solution will resemble that of the equivalent continuous maximization problem, especially for large values of L. This means that a nearoptimal choice for the index assignment sets would be I ∩ = {1, . . ., L * ∩ }, I = {L * ∩ + 1, . . ., L * }, where L * ∩ and L * would be such that This consideration suggests Algorithm 3 above.The advantage of this algorithm lies in that it involves fewer calculations and terminates sooner that the improved exhaustive search algorithm.It is clear, however, that the price paid for its reduced complexity, which is important in cases of real-time applications, is its inferior performance compared to the exhaustive search algorithms. 
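The listings for Algorithms 2 and 3 are garbled in this copy, so the following sketch only reproduces the search structure they describe: candidate prefix sets I_cap = {1, ..., L_cap} and I' = {L_cap + 1, ..., L*}, with the common layers counted twice in the rate and the objective weighting the common part by (2 - p) and (1 + C_i), as in the distortion and rate measures quoted earlier. Treat it as an illustration of the search, not a verbatim transcription of either algorithm.

```python
# Sketch of the prefix-based allocation suggested by Lemmas 2-4: for each
# candidate L_star, give I' the layers (L_cap+1 .. L_star) and I_cap the
# largest prefix (1 .. L_cap) that still fits, counting common layers twice
# in the rate. The (2 - p) and (1 + C_i) weights are taken from the
# surrounding discussion, so this is illustrative rather than definitive.
def allocate_prefix(D, C, R, p, rate_budget):
    L = len(D)
    best = (0.0, 0, 0)                                   # (objective, L_cap, L_star)
    for L_star in range(1, L + 1):
        if sum(R[:L_star]) > rate_budget:                # even one copy does not fit (Lemma 2)
            break
        L_cap = 0
        for l in range(1, L_star + 1):                   # largest doubled prefix that fits (Lemma 4)
            if 2 * sum(R[:l]) + sum(R[l:L_star]) <= rate_budget:
                L_cap = l
        objective = (2 - p) * sum(D[i] * (1 + C[i]) for i in range(L_cap)) \
                    + sum(D[i] for i in range(L_cap, L_star))
        best = max(best, (objective, L_cap, L_star))
    return best

# Hypothetical layer data for a quick check.
D = [40.0, 25.0, 10.0, 5.0, 2.0]
C = [1.2, 0.8, 0.4, 0.2, 0.1]
R = [8, 6, 5, 4, 3]
print(allocate_prefix(D, C, R, p=0.9, rate_budget=30))
```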
Let us also note that the implementation of the fast search algorithm involves serial search through all values from 0 to the terminating, estimated optimal, value of L * ∩ .A further improvement would involve a binary search modification of this algorithm, according to the actual values of φ(L ∩ , L ) at the boundaries of the binary-search interval. EXPERIMENTAL RESULTS The proposed multiple description video coding scheme was experimentally evaluated for the transmission of the Y component (15 frames/second) of the standard test sequence "Foreman" over two channels.Each frame was coded in two descriptions.Motion vector information was duplicated in both descriptions.The proposed redundancy allocation Algorithm 3 of the preceding section was applied for video transmission over two channels of total capacity 128 Kbps and for three different probabilities of description arrival: p = 0.8, 0.9, 0.95, or equivalently three probabilities of description loss equal to 20%, 10%, 5%.The number of frames in each GOP was chosen with respect to p as suggested in [28].The target rate R B for each frame was determined by allocating to intra-frames a rate equal to four times the rate allocated to interframes.The resulting descriptions, as shown in Table 1 for the first five frames of the sequence, are remarkably "balanced," that is, they have approximately equal size and yield almost equal reconstruction qualities. In the present work, we assume that descriptions that arrive at the decoder do not contain bit errors.We examine two types of transmission scenarios: in the first scenario, we assume that the channels retain their status during the entire transmission.In this case, the parameter p serves as a means to control the redundancy and is not directly associated with the condition of the channel.In the second scenario, we assume that the channels go on and off during transmission.In the latter scenario, it is possible that both descriptions of a frame are lost.In such a case, the decoder uses the most recent reference frame that is available.For each frame, the peak-signal-to-noise-ratio is used as a measure of the reconstruction quality (in dB) Following the approach adopted in [29,30], the reported mean PSNR values are computed by averaging decoded MSE values and then converting the mean MSE to the corresponding PSNR value rather than averaging the PSNR values directly. In the first transmission scenario, the coding of the "Foreman" sequence into two descriptions is simulated under the respective assumption that the channels are available or unavailable during the entire transmission.As expected, the central distortion in the proposed scheme that allows drift accumulation, which we will term multiple description wavelet video coder (MDWVC), is superior in comparison to the proposed drift-free system, termed DF-MDWVC.This was expected since when both descriptions are available, drift is eliminated anyway.On the other hand, the side distortion appears to be lower in the drift-free system.The performance of MDWVC is shown in Figure 6.The redundancy rate-distortion performance of our coders is shown in Figure 7.As seen, DF-MDWVC and MDWVC reach similar performances for redundancy greater than 15%.For lower redundancies, the drift-free system performs worse due to the very low quality of the reference frames. 
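The PSNR reporting convention used above, in which per-frame MSE values are averaged first and only then converted to decibels, can be written as a short helper. The frame layout, bit depth, and names below are illustrative assumptions.

```python
import numpy as np

def mean_psnr_from_mse(frames_ref, frames_dec, peak=255.0):
    """Mean PSNR in dB computed by averaging the per-frame MSE over the
    sequence and converting the mean MSE, rather than averaging per-frame
    PSNR values.  Assumes 8-bit luminance frames as equally shaped arrays."""
    mses = [np.mean((ref.astype(np.float64) - dec.astype(np.float64)) ** 2)
            for ref, dec in zip(frames_ref, frames_dec)]
    mean_mse = float(np.mean(mses))
    return 10.0 * np.log10(peak ** 2 / mean_mse)
```

Averaging per-frame PSNR values directly tends to give more optimistic figures dominated by the best-reconstructed frames, which is why the mean-MSE convention of [29, 30] is adopted.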
In the second simulation, in which the channels may go on and off from frame to frame, we tested our systems under identical description loss patterns.For each frame, one, two, or none of the descriptions was lost.As seen from Figure 8 and Tables 2 and 3, the drift-free system is much more reliable and demonstrates no abrupt changes in its performance, contrary to MDWVC which demonstrates significant variations in the video quality it delivers.In addition, both schemes demonstrate significant gains over the single description scheme which appears to collapse very frequently due to description losses.In Figure 8(d), we report the performance of a scheme that is based on H.264 and uses the FMO for transmission of video over two channels.This scheme uses P-frames and two FMO slices.As seen, despite the fact that the H.264-based scheme uses advanced error concealment techniques at the decoder, the reconstruction quality it delivers exhibits significant variations in comparison to the quality achieved by our drift-free scheme. Reconstructed frames obtained by simulating the transmission of 180 frames of the "Foreman" sequence at 15 frames/second over two channels of total capacity 128 Kbps and probability of description arrival equal to 0.9 using the above systems are displayed in Figure 9.The reconstruction displayed in Figure 9(c), achieved using the driftfree system, is qualitatively more pleasant than the reconstruction using MDWVC.This proves that, in practical cases, the drift-free system can be a better choice even though MDWVC operates better at low error rates.The image reconstructed using the single description scheme exhibits the worst performance. In Figure 10, we present the reconstruction quality obtained using the drift-free system for the case of transmission over four channels of total capacity 128 Kbps and probabilities of description loss equal to 20%. CONCLUSIONS We presented a wavelet-based framework for the encoding of video in multiple descriptions.The generation of multiple descriptions was performed so that drift is eliminated at the decoder side.The proposed framework is flexible and allows the encoding of video into an arbitrary number of descriptions.The resulting framework is endowed with the capability for drift-free reconstruction regardless of the number of descriptions that arrived at the decoder.Three algorithms were also presented for the optimal allocation of APPENDICES A. INTEGER (0-1) PROGRAMMING FORMULATION Then, the sets I ∩ and I are determined by the vectors x ∩ [x ∩ 1 , . . ., x ∩ L ] T and x [x 1 , . . ., x L ] T , respectively, where A T denotes the transpose of matrix A. If we adopt this notation, (18) may be written as and constraint (19) as with r [R 1 , . . ., R L ] T .Property I ∩ ⊂ I may be written as where 1 L is the L × 1 unity vector and inequalities involving vectors are meant in the percomponent sense. In order to find the optimal solution, it suffices to find binary-valued vectors x ∩ and x minimizing (A.1) subject to the constraints (A.2) and (A.3).This is an integer (0-1) programming problem and can be formulated by defining where I L is the L × L identity matrix, x and d are 2L × 1 vectors, C is a (L + 1) × L matrix, and b is a (L + 1) × 1 vector.In view of these definitions, the maximization problem may be expressed as an integer-programming problem. 
Integer (0-1) programming problem Find (0-1)-valued vector x such that Although several techniques exist for the solution of integer-programming problems, it is well known that integer-programming problems are, in general, NP-complete and, most of the times, exhaustive search over all possible realizations of binary-valued vector x is the only procedure that guarantees optimal solution.Even if a cutting-plane or branch-and-bound technique is used, it does not guarantee that the number of operations will be less than exponential in L. B. THE GENERAL MULTIPLE DESCRIPTION PROBLEM In the general case, the original frame comprises L layers and we need to form K ≥ 2 descriptions so that a rate constraint is met and the expected distortion reduction at the decoder is maximized.Conforming to the notation used for the double-description case, we define the index sets I k , k = 1, . . ., K, where each I k describes the assignment of layers to description k, and the events A k = {Description k reaches the decoder}, k = 1, . . ., K. The index-assignment sets I k , k = 1, . . ., K define 2 K disjoint subsets of the index set I = {1, . . ., L}, which can be written as where the subscript x = [x 1 , . . ., x K ] T is a (K × 1) binaryvalued vector and For every x ∈ {0, 1} K , the set J x comprises the indices belonging to the sets I j with x j = 1.The original indexassignment sets I k , k = 1, . . ., K can then be expressed in terms of the collection { J x } x∈{0,1} K as x k denote the weight of the binaryvalued vector x and for every index set representing the total rate and distortion reduction of the layers with indices in A. The total rate sent to the decoder can be expressed as where (α) comes from (B.3) and the fact that the sets J x are mutually disjoint and we can derive (β) by observing that each sum i∈ Jx R i appears exactly w(x) times in the previous expression of the total rate.For a given x ∈ {0, 1} K , assume that x j1 = • • • = x jw(x) = 1 and the rest are zero.In order to express the expected distortion reduction at the decoder in terms of the collection { J x } x∈{0,1} K , we observe that the distortion at the decoder will improve by D( J x ) (layers with indices in J x will be used) whenever the event A x {description j 1 description j 2 or • • • description j w(x) is delivered} occurs, that is, of the descriptions reaches the decoder) whose probability is 1 − (1 − p) K .Therefore, the overall expected distortion reduction at the decoder will be At this juncture, observe that both the total rate (B.5) and the expected distortion reduction (B.8) can be expressed as linear functions of the {R( J x )} x∈{0,1} K and {D( J x )} x∈{0,1} K , respectively, with coefficients depending only on the weight of the index vector x.Therefore, we can group all sets J x with the same weight and define the new (fewer) sets J k = x∈{0,1} K :w(x)=k J x , k = 0, . . ., K, (B.9) each set J k containing the layer indices assigned to exactly k descriptions.Also, observe that the set J 0 = J 0 has a zero coefficient in both (B.5) and (B.8); hence, it does not contribute to the total rate or expected distortion reduction.By reformulating (B.5) and (B.8), the maximization problem for the general multiple description case may be stated as follows. General maximization problem Find disjoint sets J 1 , . . ., J K ⊂ I maximizing D J 1 , . . ., J K = 1 − (1 − p) K The integer-programming formulation of the general maximization problem would involve K binary-valued L × 1 vectors x k , k = 1, . . 
., K, where the entries of x k indicate which layers are assigned to description k. The requirement that the sets J k , k = 1, . . ., K be disjoint can then be written component-wise as x 1 + x 2 + · · · + x K ≤ 1 L . Let us define the stacked vector x, the objective vector d K , the constraint matrix C K , and the bound vector b K , where x and d K are KL × 1 vectors, C K is a (L + 1) × KL matrix, b K is a (L + 1) × 1 vector, and the L × 1 vectors r, d, c are those defined in the double-description integer-programming formulation. Then, the integer-programming formulation of the general multiple description problem will be as follows.
General integer (0-1) programming problem. Find a (0-1)-valued vector x maximizing d K^T x subject to C K x ≤ b K .
As is clear from the integer-programming formulation, the complexity of the general maximization problem may be as high as 2^(KL). Heuristics similar to those proposed for the double-description case may be used for an estimate of the optimal index-assignment scheme, based on the general equivalent continuous problem, which can be easily formulated from (B.10) and (B.11). It is reasonable to conjecture that the heuristics stemming from the equivalent continuous general maximization problem will provide solutions deviating from the optimal one even more as K increases.
Assuming that the events A k , k = 1, . . ., K are independent and Pr{A k } = 1 − Pr{A k^c} = p, we can calculate the probability Pr{A x } = 1 − Pr{A x^c} = 1 − (1 − p)^w(x). (B.7) If we also define C(A) = Σ_{i∈A} D i C i for A ⊂ I, the distortion reduction due to motion compensation based on the layers common to all descriptions will be C(J 1_K ), 1 K being the (K × 1) unity vector. The distortion reduction due to motion compensation is conditional on the event A 1_K (at least one of the descriptions reaches the decoder), whose probability is 1 − (1 − p)^K. In this notation, the distortion-reduction term of the objective and the rate constraint of the general maximization problem read Σ_{k=1}^{K} [1 − (1 − p)^k] D(J k ) (B.10), subject to R(J 1 , . . ., J K ) = Σ_{k=1}^{K} k R(J k ) ≤ R B . (B.11)
Figure 2: Block diagram of the decoder.
Figure 3: (a) Assignment of the blocks of a wavelet representation for the case of two descriptions. The bitstreams corresponding to the blocks may be included in one or more descriptions. (b) Representation of the redundant and nonredundant parts of the stream for the case of two descriptions.
Figure 5: (a) Comprising layers and induced distortion reduction, (b) distortion reduction as a function of rate for a frame of "Akiyo" using the source coder of Section 3.
Figure 8: Reconstruction quality for the "Foreman" sequence when the channels go on and off during transmission and a probability of error equal to (a) 5%, (b) 10%, (c) 20%, and (d) transmission based on H.264 using flexible macroblock ordering.
Figure 9: Reconstructed frame for the transmission of the "Foreman" sequence, p = 0.9, over two channels of total capacity 128 Kbps: (a) original "Foreman" frame, (b) reconstructed using the coder without drift control (25.84 dB), (c) reconstructed using the drift-free coder (28.81 dB), and (d) reconstructed using the single description coder (25.78 dB).
Figure 10: Reconstruction quality obtained using the drift-free system with four descriptions transmitted over channels with probability of loss equal to 20%.
Table 1: Description sizes (bytes) and ratio of the two descriptions, for several frames of the sequence "Foreman" (p = 0.9, R_total = 128 Kbps).
Table 3: Performance comparison. Standard deviation of reconstruction quality is reported.
Michael G. Strintzis received the Diploma in electrical engineering from the National Technical University of Athens, Athens, Greece, in 1967 and the M.A. and Ph.D. degrees in electrical engineering from Princeton University, Princeton, NJ, in 1969 and 1970, respectively. He joined the Electrical Engineering Department, University of Pittsburgh, Pittsburgh, PA, where he served as an Assistant Professor from 1970 to 1976 and an Associate Professor from 1976 to 1980. During that time, he worked in the area of stability of multidimensional systems. Since 1980, he has been a Professor of electrical and computer engineering at the Aristotle University of Thessaloniki, Thessaloniki, Greece. He has worked in the areas of multidimensional imaging and video coding. Over the past ten years, he has authored over 100 journal publications and over 200 conference presentations. In 1998, he founded the Informatics and Telematics Institute, currently part of the Centre for Research and Technology Hellas, Thessaloniki. He was awarded the Centennial Medal of the IEEE in 1984 and the Empirikeion Award for Research Excellence in Engineering in 1999.
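For the general K-description formulation of Appendix B, the objective and constraint in (B.10)-(B.11) can be evaluated directly once an assignment of layers to descriptions is fixed. The sketch below is an assumption-laden illustration: it uses only the per-weight probabilities 1 − (1 − p)^k and omits the motion-compensation term for layers common to all descriptions.

```python
def expected_gain_and_rate(assign_counts, D, R, p):
    """Evaluate the rate and the distortion-reduction part of the general
    K-description objective for a given assignment.

    assign_counts[i] is the number of descriptions carrying layer i; a layer
    carried by k descriptions is decodable with probability 1-(1-p)^k and its
    rate is paid k times, matching (B.10)-(B.11).  The motion-compensation
    bonus for layers common to all K descriptions is left out of this sketch.
    """
    gain = sum((1.0 - (1.0 - p) ** k) * d for k, d in zip(assign_counts, D) if k > 0)
    rate = sum(k * r for k, r in zip(assign_counts, R))
    return gain, rate

# toy check: three layers sent in 3, 2, and 1 of K = 3 descriptions
print(expected_gain_and_rate([3, 2, 1], D=[30.0, 20.0, 10.0], R=[8.0, 8.0, 8.0], p=0.9))
```

A candidate assignment is feasible whenever the returned rate does not exceed the budget R_B, so the helper can serve as the inner evaluation step of any heuristic search over assignments.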
2017-07-05T21:54:10.215Z
2006-01-01T00:00:00.000
{ "year": 2006, "sha1": "a13e3443ad88b41865c6767defd2ee5409f0e326", "oa_license": "CCBY", "oa_url": "https://asp-eurasipjournals.springeropen.com/counter/pdf/10.1155/ASP/2006/83542", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "a13e3443ad88b41865c6767defd2ee5409f0e326", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
153787761
pes2o/s2orc
v3-fos-license
Presidential Elections and Corruption Perceptions in Latin America This paper argues that perceptions of corruption in Latin America exhibit predictable fluctuations in the wake of presidential turnover. Specifically, presidential elections that result in the partisan transfer of power are normally followed by a surge-and-decline pattern in perceived corruption control, with initial improvements that fade with time. The causes are multiple and stem from the removal of corrupt administrations, public enthusiasm about administrative change, and the relative lack of high-level corruption scandals in the early phases of new governments. A statistical analysis of two widely used corruption perceptions indices demonstrates the pattern for eighteen Latin American democracies from 1996 to 2010. Both indices exhibit a temporary surge (of about two years) after turnover elections, while no such change follows reelections of incumbent presidents or parties. The theory and results are relevant for understanding public opinion in Latin America and for the analysis of corruption perceptions indices. Introduction Although democracy offers regular opportunities for citizens to hold their public officials accountable, it is clear that political corruption -the misuse of governmental power for personal or political profit -can persist in democratic systems. By themselves, elections are weak controls on corruption because politicians seek to keep their corruption out of public view and because electoral defeats due to (exposed) corruption are neither certain enough nor costly enough to serve as a strong deterrent. Nevertheless, elections often generate considerable enthusiasm about corruption control, especially when they result in a partisan transfer of presidential power. Sometimes this is because the outgoing administration was notably corrupt and its removal provides some relief. Other times, buoyant perceptions of corruption control may stem from the public's enthusiasm for the new president or the relative lack of highlevel corruption scandals in the early stages of new governments. In any case, however, the enthusiasm seldom lasts: presidential honeymoons end, governments once untainted become implicated in scandals both small and large, and beliefs that new officials are less corrupt than their predecessors are often abandoned. In other words, presidential turnover is frequently followed by a surge-and-decline pattern in public perceptions of corruption control, with initial improvements fading over time. This paper 1 develops this argument about corruption perceptions cycles and demonstrates that a surge-and-decline pattern does follow partisan turnover in Latin American presidencies. A statistical analysis of two widely used corruption perceptions indices -the World Bank's Control of Corruption Index and Transparency International's Corruption Perceptions Index -for eighteen Latin American democracies over the period 1996-2010 shows there is an improvement in perceived corruption control in the wake of turnover elections but not after elections that result in the return of the incumbent president or party. The "turnover surge" typically lasts for two years before corruption perceptions shed some or all of their gains. The pattern is robust to various controls, including changing rates of economic growth and major corruption scandals. 
The theory and results are relevant for understanding public opinion, support for democratic institutions (Canache and Allison 2005; Seligson 2006), and the politics of anticorruption reforms in Latin America. They also have important implications for the analysis of corruption perceptions indices by scholars, by organizations such as the Millennium Challenge Corporation (which allocates its foreign aid according to country performance in such indices), and by anticorruption NGOs and the media. Consider, for example, the interpretation of Costa Rica's 2007 Corruption Perceptions Index (CPI) offered by Transparency International (TI) and reprinted by the Latin Business Chronicle (2007): "The case of Costa Rica may serve to illustrate the importance of having autonomous and respected institutions in place that can help to adequately fight corruption. Just a few years ago the country experienced a decrease in its CPI score, which could be attributed mainly to the fact that former presidents and high level officials have been found to be involved in bribery scandals. The independence and actions of the justice system in taking up the cases possibly contributed to an improved image of the government and politicians in the eyes of the expert community responsible for rating the countries listed in the CPI." 2 TI's guess for why Costa Rica's CPI score surged in 2007 may be as good as one that points to the previous year's turnover election or to some other development. However, that the surge immediately followed the 2006 turnover election means that it fits a pattern that is observed throughout Latin America and thus could have been predicted by that event alone. Furthermore, the election was exactly the type that this paper argues is likely to improve corruption perceptions because a major cause of the voters' rejection of the ruling party's candidate - who received less than 5 percent of the vote - was the bribery scandals that damaged the outgoing government. Indeed, from this paper's perspective, what is distinctive about Costa Rica's post-2006 CPI is not that it increased markedly in 2007 and again (to a lesser degree) in 2008 but that it did not reverse course in the second half of the presidential term. This may be because the country's score was still rebounding from its decline in the late 1990s, when various scandals tarnished the presidency. Or it could be because at the midpoint of the term, around the time that the turnover surge tends to ebb, Costa Rica implemented the Dominican Republic-Central America-United States Free Trade Agreement (CAFTA-DR). Or it may be because the government, led by an unusual politician (Nobel Peace laureate Óscar Arias), actually oversaw a significant reduction in the country's corruption. Whatever the reason, the recognition that corruption perceptions tend to surge and decline following turnover elections reorients our understanding of corruption perceptions and their change over time. This paper is organized as follows: The first section considers the temporal analysis of corruption perceptions indices. The second section presents the argument for corruption perceptions change in the wake of partisan turnover in the presidency. 3 The remaining three sections describe the data, provide the empirical analysis, and discuss implications.
Footnote 1: The author thanks the two anonymous reviewers for their comments and suggestions.
Footnote 2: TI's report was titled 2007 Corruption Perceptions Index Regional Highlights: Americas.
On Temporal Changes in Corruption Perceptions Indices After hypothesizing corruption perceptions change in the wake of certain elections, this study examines temporal changes in two corruption perceptions indices. Most other analyses of these measures differ in two respects: they aim to study corruption -not perceptions -and analyze the data cross-sectionally. 4 The first difference is important because the controversy about the indices centers on whether they are good or poor proxies for corruption (see Morris 2008;Olken 2009). There are various reasons why a survey of corruption perceptions (P) may misrepresent the amount of corruption (C) in one or more political systems. So to minimize measurement error, corruption perceptions indices (I) are constructed by combining various surveys, including both "international" surveys of professional analysts and "domestic" surveys of households. Consequently, the indices are (i) only likely to be sensitive to trends and events that similarly affect the perceptions of both types of respondents and (ii) not likely to be colored by respondents' partisan or ideological leanings. However, there are still phenomena that may lead the various audiences to subscribe to overly optimistic or pessimistic perceptions of corruption control. For example, a corruption scandal that receives widespread media attention can worsen P while C remains unchanged or even diminishes. 5 Economic conditions can also color perceptions of 3 Throughout, "turnover" is used interchangeably with "government defeat" and "partisan turnover." The use of the latter is not meant to imply that all turnover elections must defeat or elect a member of a consolidated party; it is used merely to help distinguish elections that result in a total transfer of power from elections that install a successor (designated or de facto) to a term-limited or retired president, which are considered "reelections." Sections 2 and 3 discuss these distinctions further. 4 This literature is reviewed by Treisman (2007) and Lambsdorff (2006). 5 Although scandals can sometimes reveal information about the amount of corruption in a political system -and thus it may be rational to update one's corruption, even though the index producers intend for their measures to be independent of "halo effects" -whereby evaluations of a country's politics or economics falsely tinge corruption perceptions. 6 These phenomena can be problematic for analyses that use perceptions data to study corruption; however, they are not problematic for studies that focus on perceptions. The key difference between a temporal and a cross-sectional analysis of a perceptions index lies in the type of measurement error that it confronts: while a survey's mismeasure of differences across countries affects cross-sectional inference, the mismeasure of a country's change over time affects temporal inference. Although index producers pay more attention to intrasurvey error than they do to comparability across years, intrasurvey error remains. Cross-sectional studies confront this in the same way as the temporal analysis below: by examining index differences in the aggregate. 7 No individual index difference -between two countries or between two years -is given serious attention. 8 Presidential Turnover and Corruption Perceptions The course and outcome of presidential elections are of sufficient importance and interest to influence public beliefs about a country's governance, including its corruption control. 
There are multiple ways for turnover elections to influence corruption beliefs, including indirect effects that occur via the influence of turnover on actual political corruption. A mechanism of this type occurs when voters dismiss a government for its corruption and elect in its place an administration that engages in less corruption. In such cases, the direction of causality between turnover and corruption is largely in reverse because it is a government's corruption that contributes to both its ouster and the behavior of the corruption beliefs when they occur -they may also be poor indicators of corruption and/or its change over time. 6 Kurtz and Schrank (2007a, b) and Kaufmann, Kraay, and Mastruzzi (2007a, b) debate the presence of halo effects. 7 Index values (as opposed to differences) are also ignored because they are not meaningful by themselves. 8 Because the World Bank's index includes measures of uncertainty, some individual index differences can be treated as meaningful Mastruzzi 2004, 2006). next administration. 9 Turnover can also reduce corruption exogenouslythat is, even when corruption is not a reason for the election result. More importantly, an exogenous turnover effect is more likely to reduce corruption than increase it because it is common for corruption to grow with government tenure, especially in countries without robust anticorruption institutions. This may occur because the government's priorities shift from reshaping public policy to simply maintaining power, because officeholders become increasingly audacious and believe that they can get away with (more) corruption, or because elected and administrative positions can increasingly attract rent seekers rather than policy advocates. 10 In any case, a change of administration can reverse the trend, at least temporarily. Notwithstanding the potential for turnover to decrease corruption, it is easy for the public to overestimate the difference that turnover makes. One potential cause of misperception is that the early stages of new governments tend to feature relatively few corruption scandals that implicate high-level incumbents. In large measure, this is because corruption is conducted so as to foil discovery. Therefore, and also because the whistleblower or journalist may deliberate before exposing corruption, there is usually a considerable delay between the inception of a corrupt scheme and its public revelation. The lack of executive branch corruption scandals during the early stages of new governments may also have something to do with a lack of within-government power struggles (see Balán 2011). Although Balán does not identify postturnover periods (or any other part of the election cycle) as particularly lacking in power struggles, it is possible that they are relatively lacking in such feuds -in which case his theory would predict few scandals. Either way, major corruption scandals in Latin America seem uncommon during the first year or so after a transfer of power. As regards the influence-peddling scheme under Fernando Collor de Mello, the mensalão under Luiz Inácio Lula da Silva, MOP-Gate and the Caso Coimas in Ricardo Lagos's Chile, 9 The improvement that accompanies the removal of a corrupt administration may occasionally be augmented by anticorruption reforms that are enacted by the new administration. Both mechanisms -the election-motivated provision of public goods and the electoral defeat of corrupt governments -are familiar. 
They are key reasons why democratic systems are believed to outperform nondemocratic systems in corruption control (Rose-Ackerman 1978; Montinola and Jackman 2002). 10 Of course, these are also reasons why corruption is said to become more commonplace with long-standing rule by a party or government. The notion that corruption may often ebb with turnover complements the idea that corruption often grows with government entrenchment. the bribery scandal that precipitated Alberto Fujimori's resignation, the 2004 bribery revelations that implicated Costa Rica's Miguel Ángel Rodríguez and Rafael Ángel Calderón, and the Caso Skanska, which raised questions about Néstor Kirchner's government, not one of these well-known scandals broke during the respective president's first year in office. Dilma Rousseff's inaugural year provides a contrast as at least six of her ministers were implicated in corruption scandals. But Rousseff's election also differed from Lula's, Menem's, and Fujimori's in that it represented continuity, not a partisan transfer of power. Indeed, like Rousseff herself, four of the six officials that she deposed for corruption during her first year as president had been ministers in Lula's government. A second reason why a public might overestimate a new government's control of corruption relates to its enthusiasm for the new president and the change of political direction. Of course, it is not unusual for opposition candidates to claim that their administrations will reduce corruption; nor is it unusual for the public to believe such claims, especially when they are made by candidates who have not yet held elected office at the national level. However, corruption need not be a salient campaign issue for enthusiasm to color the public's perceptions of corruption control -whenever there is widespread excitement for a new president the public is likely to subscribe to sanguine beliefs and expectations about government. The various ways that turnover can improve corruption perceptions are not mutually exclusive; in fact, it is likely that they often occur simultaneously. 11 This complicates any attempt to ascertain the degree to which changes in perceptions stem from reduced corruption, a lack of corruption scandals, and/or public enthusiasm for new presidents. The difficulty is not eased by the rather temporary nature of each effect -at least, none is destined to buoy perceptions for long. However, the combination does increase the likelihood that corruption perceptions will improve with any given turnover election. 11 Each mechanism is more pertinent to "high-level" corruption -including influence peddling, "pay-to-play" schemes, and other types of malfeasance by high-level officeholders -than to "petty" types of corruption, such as bribes demanded by bureaucrats or police. Turnover elections are hypothesized to primarily affect the former type, which throughout this paper is referred to as "political" corruption. The indices analyzed below aim to account for both types of corruption, but they seem to emphasize political corruption (e.g., see footnote 12). Other measures, such as Transparency International's Bribe Payers' Index, are more focused on routine corruption. At the same time, there are not strong reasons to expect a similar change after the reelection of an incumbent president or government. The public expects more continuity than change whenever a government is reelected, even when it is headed by a successor to a term-limited or resigned president. 
Reelections are also less likely to either disrupt corrupt practices or to introduce a period of scandal scarcity. So, the mechanisms that foster improved perceptions with turnover elections do not accompany reelections, at least not to the same degree or frequency. Therefore, I consider two testable hypotheses about presidential elections and corruption perceptions -one for turnover elections and one for reelections: H1: Following presidential elections in which the incumbent government or ruling party is defeated, perceptions of corruption control will improve (i.e., register less corruption); however, it will be common for some or all of those gains to be reversed as the new president's term progresses. H2: Following presidential elections that are won by the incumbent government or ruling party, perceptions of corruption will not change. Both hypotheses predict tendencies, not what will necessarily occur after every election. Indeed, contrary to H2, it is possible for perceptions to worsen after a government "steals" reelection or after a campaign in which allegations of government corruption have surfaced. And, contra H1, corruption perceptions can fail to improve with a transfer of power if the new president's early decisions about personnel or policies undermine public confidence or if the change returns to government a president or party with a reputation for corruption. Typically, however, corruption perceptions will improve with turnover, not with reelection. Data To test H1 and H2, I examine one-year and multiyear changes in two annual corruption perceptions indices, one from the World Bank and the other from TI. Both are produced by combining corruption surveys and measures from various independent organizations, including risk management firms, NGOs, governments, and academic institutions. In 2010 TI relied on seventeen sources to compute the Corruption Perceptions Index (hereafter TICPI) and the World Bank used thirty-one sources to construct the Control of Corruption Index (hereafter WBCCI). 12 The reason these organizations "average" various surveys together is that any one survey may contain bias, measurement error, or an overemphasis on a particular type of corruption. The indices use different methods to aggregate the underlying data. TI's method computes percentile ranks for each country on each survey and then averages those ranks together to obtain an average score and rank for each country (see Transparency International 2011). The scores range between zero and ten, with higher numbers corresponding to less perceived corruption. To construct the WBCCI, Kaufmann, Kraay, and Mastruzzi use an "unobserved components model" to map each of the k surveys onto a common unobserved measure of corruption perceptions, which is taken to have a cross-national mean of zero and unit standard deviation. In this process, some surveys map more "neatly" to the measure than others, exhibiting smaller variance in the error term. The WBCCI is the weighted average of the k estimates of the unobserved measure, each weighted inversely to the size of its error variance (see Kaufmann, Kraay, and Mastruzzi 2010 for additional details). This method is more complicated than the TI method, but Kauffman, Kraay, and Mastruzzi note that it has several benefits when compared to other methods -one being the precision that comes from "maintaining some of the cardinal information in the underlying data" (Kaufmann, Kraay, and Mastruzzi 2010: 16). 
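Before turning to comparability over time, the two aggregation strategies just described can be illustrated with a toy sketch: Transparency International's averaging of within-survey percentile ranks, and Kaufmann, Kraay, and Mastruzzi's precision weighting, in which each source counts inversely to its error variance. The function names, the data layout, and the assumption that the error variances are already known are illustrative only; the real indices involve further rescaling and estimate those variances inside the unobserved components model.

```python
import numpy as np
from scipy.stats import rankdata

def ti_style_score(survey_matrix):
    """Average of per-survey percentile ranks (rows = countries, columns =
    surveys, NaN = country not covered by that survey).  Toy illustration of
    the rank-and-average logic, not the actual TICPI procedure."""
    ranks = np.full(survey_matrix.shape, np.nan)
    for j in range(survey_matrix.shape[1]):
        col = survey_matrix[:, j]
        ok = ~np.isnan(col)
        ranks[ok, j] = rankdata(col[ok]) / ok.sum()    # percentile rank within survey
    return np.nanmean(ranks, axis=1)

def wb_style_score(survey_matrix, error_variances):
    """Precision-weighted average of standardized survey scores, mimicking the
    weighting-by-inverse-error-variance step of the unobserved components
    model; the variances are taken as given rather than estimated."""
    z = (survey_matrix - np.nanmean(survey_matrix, axis=0)) / np.nanstd(survey_matrix, axis=0)
    w = 1.0 / np.asarray(error_variances, dtype=float)
    return np.nansum(z * w, axis=1) / np.nansum(~np.isnan(z) * w, axis=1)

# hypothetical scores for four countries on two surveys with different scales
scores = np.array([[3.1, 40.0], [5.2, 55.0], [np.nan, 70.0], [7.4, 85.0]])
print(ti_style_score(scores))
print(wb_style_score(scores, error_variances=[1.0, 0.5]))
```

The sketch conveys why a noisier source moves the weighted index less than a precise one, which is the property the paper relies on when treating the indices as smoothed measures of perceptions.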
They also note that the index can be meaningfully compared across time, barring two potential methodological issues. First, because the weighting of the various indicators changes from year to year, it is possible for across-year differences in the index to result from different weighting schemes rather than different survey responses. Second, because the method fixes each year's mean cross-national score at zero, across-year changes in the index for a particular country could stem from the average country becoming either more or less corrupt. Nonetheless, Kaufmann, Kraay, and Mastruzzi (2004, 2010 demonstrate that these two issues are of little concern. Because both the TICPI and WBCCI aggregate data from various surveys, neither measures perceptions on a particular date. It can be assumed that major events that occur early in the year will influence 12 The WBCCI is one of the six World Governance Indicators developed by Kaufmann, Kraay, and Mastruzzi (2004, b, 2010. They describe the WBCCI as "captur[ing] perceptions of the extent to which public power is exercised for private gain, including both petty and grand forms of corruption, as well as 'capture' of the state by elites and private interests" (see <http://info. worldbank.org/governance/wgi/pdf/cc.pdf> (1 August 2012). most of that year's surveys and thus that year's indices, while events that happen later in the year are more likely to influence the indices in the subsequent calendar year. It is, however, impossible to be any more precise about whether a particular event (e.g., turnover election) will have a greater impact on that year's or the following year's index. This imprecision means that any relationship between events and indices is likely to be "noisy." It thus challenges the present paper's main hypothesis (H1), which expects index change in the calendar year that follows a turnover election year. The WBCCI years analyzed here are 1996-2010. 13 Before 2002, the index was only produced biennially. For some of the analysis below, the missing data for 1997, 1999, and 2001 are imputed by using the mean of the surrounding two years. This is useful to expand the number of years available for analysis, but it could introduce autocorrelation in regression models that analyze annual changes. For this reason, the analysis of oneyear WBCCI changes uses only the post-2002 data. When multiyear changes are examined, pre-2002 data are included. Although the TICPI has been issued annually since 1996, it did not cover all Latin American countries in its early years -small countries, in particular, were left out. I do not impute TICPI scores because most missing observations are not preceded by an observation year. Coding Elections Four variables are used to account for presidential elections. Their coding for each country-year is shown in Bolivia 2003;Ecuador 1997Ecuador , 2000Ecuador , and 2005Peru 2000;and Honduras 2009. 14 The coding of these variables requires some comment in particular cases. In Honduras 2009 there was both a presidential election and a partisan transfer of power; but the November election occurred after the June military coup that ousted Manuel Zelaya. The country-year is coded Irregular Change=1 and Turnover=1 to reflect the coup and the change in administration, but given the short time between the coup and the election the year is not coded Reelection=1. Peru 2000 is coded Reelection=1 for Alberto Fujimori's reelection and Irregular Change=1 for his resignation several months later. 
The subsequent year (when Alejandro Toledo was elected president) is coded Turnover=1. In other words, the 2000 installation of interim president Valentín Paniagua is not coded as a turnover event. This and the previous decision prevent the same country-year from being both Reelec-tion=1 and Turnover=1. Argentina 2001 is coded Turnover=1 and Irregular Change=1 because of the resignation of President Fernando de la Rúa of the Radical Civil Union party and the installation of the Peronist Eduardo Duhalde as interim president. Argentina 2003 is coded Reelection=1 due to the election of Peronist Néstor Kírchner; even though Kirchner was not Duhalde's preferred successor, it seems appropriate to mark the election as continuity rather than change and to reserve Turnover=1 for transfers of power between Peronists and Radicals. A similar question surrounds the 1999/2000 victory of Ricardo Lagos in Chile. 15 Because Lagos was a member of the incumbent coalition and had been a minister in the outgoing administration, Chile 1999 is coded Reelection=1 and Turnover=0. However, the election of a socialist president did mark something of an ideological change from the previous Christian democrat. Perhaps more significantly, it was a milestone in the post-Pinochet era -one with enough significance to influence opinions about the state of Chilean democracy. . See text for a description of these variables. O represents a nonconsecutive reelection (i.e., the election of a nonincumbent who once had been president). Other events to note include Paraguay 2000 and Mexico 1997, which saw historic defeats of long-ruling parties -the former in a vicepresidential election and the latter in midterm elections. Tellingly, the corruption perceptions indices for both countries improved in the following year. However, so as not to appear ad hoc, the coding scheme does not account for these events. Lastly, I code Venezuela 2000 as Reelection=1. Though not a regular election year, Hugo Chávez's victory in the recall election was an event of enough significance to demand parity with regular presidential reelections. The fourth election variable, ALBA, provides a way to differentiate turnover elections by the ideological stance or populism of the presidentelect. ALBAit =1 if Turnover it =1 and the new administration subsequently joined Chávez's Bolivarian Alliance (ALBA), a counter to the United States and its proposal to create an Americas-wide free trade agreement; = 0 otherwise. Latin America had seen many leftist presidents during the previous fifteen years, and many observers argue that the difference between left and right in the region has been less stark than the difference between the "two lefts" -one a moderate, centrist camp (typified by Lula and Michelle Bachelet) and the other a more statist, nationalist, or populist camp that is more ideologically opposed to the United States (typified by Chávez). 16 Despite how often this type of distinction is made, there is no clear, accepted method to identify particular governments as belonging to one camp or the other. ALBA may not perfectly identify what many consider to be the more statist left, but it is a useful operationalization because ALBA includes governments that are commonly said to be in that camp and because it provides an unambiguous coding criterion. Including this variable in the analysis helps control for the possibility that the indices are influenced by the ideological stance or populism of the president-elect. 
In particular, it could be that many of the foreign surveys used to make the indices negatively viewed these elections, thus resulting in a smaller-than-average turnover surge. Analysis The first step in the analysis is to answer the following question: Are one-year index changes positive following turnover elections? The answer is yes. The average one-year index change (¨Y1 it = index it -index i(t-1) ) that occurs in years that immediately follow turnover election years is larger than zero in one-tailed t-tests with both indices -and this is despite a very small number of observations. With the post-2002 WBCCI, the mean ¨Y1=.06 (p<.05, N=19); with the TICPI, the mean ¨Y1=.13 (p<.05, N=28). 17 The trend is illustrated by Figure 1 and Figure 2, which show the results of an ordinary least squares (OLS) regression of ¨Y1 on several dummy variables for the WBCCI and TICPI, respectively. The dummies differ according to the number of years that have passed since a particular event. For example, Turnover (t) equals one if the observation (country-year) included a turnover election, while Turnover (t-1) equals one if the observation is one year after a turnover election (for the same country). The regressions exclude a constant term and include instead a dummy variable for excluded category years. Thus, each coefficient estimate is the mean for a group of observations, and the confidence intervals indicate how those changes compare to zero. At the bottom of each figure, the excluded group ("Other years") is shown to have a mean change that is close to zero. By contrast, Turnover (t-1) years exhibit a mean change that is both statistically positive and larger than the change of any other group -that is, the indices show unusually large increases in the years that immediately follow turnover elections. Source: Author's own calculation and compilation. Figures 1 and 2 show several other patterns. First, mean WBCCI change in Reelection (t-1) years is zero, as H2 anticipates; however, mean TICPI change in Reelection (t-1) years is markedly positive. Second, the coefficient on Turnover (t-2) is positive in both figures, suggesting that many countries exhibit two consecutive years of index gains after turnover. Third, Figure 1 shows evidence of a postsurge decline in corruption perceptions: in Turnover (t-3) years the ¨Y1 is negative and of such magnitude as to erase the improvement that comes in the previous year. With the TICPI, however, there is no clear indication of a postsurge decline -except perhaps in turnover election years, which raises the possibility that the increase in Turnover (t-1) years is not something unusual but instead a more routine regression to the mean. While this paper's theory would predict some mean regression -because worsening corruption perceptions before an election can contribute to turnover -it begs the question of whether the ¨Y1<0 in Turnover (t) years should be emphasized instead of the ¨Y1>0 in Turnover (t-1) years. This question is Mean change in TICPI since previous year answered in the next subsection, where the regression includes a lagged dependent variable. Finally, Figure 2 shows that the TICPI decreases after irregular transfers of power, while Figure 1 shows that the effect of irregular transfers on the WBCCI is mixed. Either result might have been expected. While a coup or unexpected resignation may often heighten political anxieties and thus prompt perceptions downgrades, it can also be taken to signal the end of a troublesome period. 
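The first step of the analysis, a one-tailed test of whether the mean one-year change is positive in the years immediately following turnover elections, is straightforward to reproduce. The snippet below uses simulated values in place of the actual index changes, so the printed numbers are purely illustrative.

```python
import numpy as np
from scipy import stats

def one_tailed_surge_test(delta_y1):
    """One-sample t-test of whether the mean one-year index change in
    Turnover(t-1) country-years exceeds zero, as in the first step of the
    analysis.  delta_y1 is an array of index(t) - index(t-1) values."""
    t_stat, p_two_sided = stats.ttest_1samp(delta_y1, popmean=0.0)
    p_one_sided = p_two_sided / 2.0 if t_stat > 0 else 1.0 - p_two_sided / 2.0
    return float(np.mean(delta_y1)), float(t_stat), float(p_one_sided)

# hypothetical stand-in for the 19 post-turnover WBCCI changes
rng = np.random.default_rng(0)
print(one_tailed_surge_test(rng.normal(loc=0.06, scale=0.12, size=19)))
```

The group-mean regressions behind Figures 1 and 2 amount to the same comparison carried out for every event-timing dummy at once, with the constant replaced by a dummy for the excluded years.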
A Dynamic Model While Figure 1 and Figure 2 provide some indication of how index changes following turnover elections compare with index changes in other years, the comparison can be improved by adding control variables and accounting for serial correlation. Equation (1) includes a lagged dependent variable as well as control variables (X j ) and is estimated with OLS with panel-corrected standard errors (OLS-PCSE) to account for serial correlation (see Beck and Katz 1995): Coefficients Ƣ 2 and Ƣ 3 estimate how index changes in the years that follow turnover elections and reelections compare with index changes that occur in other years. If mean change in excluded category years is zero, then H1 and H2 predict Ƣ 2 >0 and Ƣ 3 =0, respectively. The model does not provide a clear test of the "decline" part of H1; that is tested with a different model below. It is reasonable to expect a negative coefficient on the lag (Ƣ 1 <0) because of regression to the mean. 18 Figures 1 and 2, however, suggested that index changes following turnover elections are often positive for two consecutive years, which would decrease the chance of Ƣ 1 <0. This is not a concern, but it does mean that the model could be improved by including a Turnover (t-2) dummy to account for how those years differ from other years. 19 If it is common to observe index gains in both the first and second years after a turnover election and if it is otherwise atyp-18 Few countries exhibit a clear long-term trend in either index. The closest instance would be Uruguay's TICPI, which increased in eight of eleven years. 19 As explained, the inclusion of Turnover (t-2) is not to "control" for countries that experienced turnover in back-to-back years. No country had such an experience. ical to observe back-to-back years of a ¨Y1>0, Turnover (t-2) will receive a positive estimate and its inclusion will make the coefficient on the lag more negative. Control variables include Irregular Change i(t-1) , ALBA i(t-1) , and the following: 20 GDP Growth Rate it = the (mean-centered) percent change in real GDP per capita since the previous year. This variable is likely to receive a positive coefficient estimate, indicating that economic growth is correlated with less perceived corruption. Scandal it =1 if a major corruption scandal implicating the executive branch broke during the year; =0 otherwise. This variable is meant to account for scandals that have the potential to significantly alter corruption perceptions. It therefore ignores "minor" scandals or scandals in countries already perceived to be highly corrupt and only accounts for scandals that are particularly egregious or highly unusual for the country in which it occurs. Because there is no simple way to operationalize this variable for all Latin American countries over 15 years, the coding is impressionistic. Four cases are deemed sufficiently important to be coded Scandalit=1: the 2004 corruption allegations that implicated former presidents Rodríguez and Calderón in Costa Rica; the MOP-GATE and Caso Coimas scandals that implicated Chile's government in 2002; the bribery revelations in Peru 2000 that prompted Alberto Fujimori's resignation; and the mensalão in Brazil 2005 -a corruption scandal that was attention-grabbing even by Brazilian standards. 
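A sketch of how a model in the spirit of equation (1) can be assembled from a country-year panel is given below. The column names are hypothetical, and the estimator is plain OLS with a heteroskedasticity-robust covariance used as a stand-in: the panel-corrected standard errors of Beck and Katz (Stata's xtpcse) are not reproduced here, so only the construction of the lagged variables and the point estimates are comparable in spirit.

```python
import pandas as pd
import statsmodels.formula.api as smf

def estimate_dynamic_model(panel: pd.DataFrame):
    """Point estimates for a model in the spirit of equation (1):
    d_score[i,t] = b1*d_score[i,t-1] + b2*Turnover[i,t-1] + b3*Reelection[i,t-1]
                   + controls + e.
    `panel` is assumed to contain columns country, year, score, turnover,
    reelection, irregular, alba, growth, and scandal."""
    df = panel.sort_values(["country", "year"]).copy()
    df["d_score"] = df.groupby("country")["score"].diff()          # one-year index change
    df["d_score_lag"] = df.groupby("country")["d_score"].shift(1)  # lagged dependent variable
    for v in ["turnover", "reelection", "irregular", "alba"]:
        df[v + "_lag"] = df.groupby("country")[v].shift(1)         # event dummies lagged one year
    formula = ("d_score ~ d_score_lag + turnover_lag + reelection_lag"
               " + irregular_lag + alba_lag + growth + scandal")
    return smf.ols(formula, data=df.dropna()).fit(cov_type="HC1")
```

Under H1 and H2 the coefficient on turnover_lag should be positive and the coefficient on reelection_lag indistinguishable from zero, with the lagged change capturing regression to the mean.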
21 Of course, one could make a case to exclude one of these scandals or to include other scandals, but wrestling with various coding schemes for this type of control variable is not worthwhile in this con-20 Other variables that were analyzed include whether the president was elected nonconsecutively, whether there was reported fraud or violence surrounding the election, and whether the country signed or implemented a trade agreement with the United States. None of these variables affected the results. 21 The mensalão, or "big monthly stipend," was furnished to lawmakers so that they would support the government's agenda. text, as there are no strong reasons to suspect that any coding scheme will dramatically alter how the key variables of interest (i.e., Turnover and Reelection) behave in the statistical model. Scandal is expected to receive a negative coefficient estimate. Nationalization it =1 if the government nationalized a sector of the hydrocarbon industry; =0 otherwise. Nationalization=1 for Argentina 2004, Bolivia 2006, Ecuador 2006, and Venezuela 2001. The variable serves as an additional measure of "leftward" policy change, at least among countries that have significant hydrocarbon resources to nationalize. It is included because nationalizations, even when partial, receive considerable attention at home and abroad. If corruption perceptions indices are heavily influenced by risk consultants and other foreign analysts, the variable is likely to receive a negative estimate. Table 2 reports OLS-PCSE estimates of (1). The first two regressions use the post-2002 WBCCI; the second two, the TICPI. 22 The results of the first and third regressions indicate that Turnover (t-1) years exhibit a statistically positive ¨Y1 relative to excluded years (p<.10 in both regressions) and that Reelection (t-1) years do not. Because the model includes a lagged dependent variable, we can conclude that the increase in Turnover (t-1) years is not simply regression to the mean. Regressions two and four add Turnover (t-2) to the model. As expected, the variable receives a positive estimate and makes the coefficient on the lagged dependent variable more negative. Also, the predicted changes in Turnover (t-2) years, after taking into account the increase in Turnover (t-1) years and the coefficient on the lag, are .04 (WBCCI) and .025 (TICPI). This suggests again that most countries experience two years of index gains after turnover. Unlike with turnover elections, significant changes in the corruption indices do not follow hydrocarbon nationalizations. Also, the turnover surge is not significantly lower for those presidents who joined ALBA. Note also that the estimates on GDP Growth Rate and Scandal are always in the anticipated direction, and that the latter is significant with the TICPI. How Common Are "Surges"? The analysis has shown that the average turnover election is followed by one-year increase of roughly .06 WBCCI units and .1 TICPI units. These changes are not large, with each being about one-twentieth of the standard deviation in the global index. A turnover election does not make Nicaragua look like Costa Rica or Argentina like Chile. Still, the change is large enough to move most countries up a spot in the ranking of Latin American states. 23 Of course, some increases are larger than average, while others are smaller (see Table 3). Thus, we might ask how common it is for a turno-ver election to be followed by a "large" increase. 
If (say) a large increase exceeds .15 WBCCI units or .3 TICPI units, then the answer is 25 percent and 20 percent of the time, respectively. These frequencies are twice what are observed in the dataset, as only 12.5 percent of WBCCI changes and 9.8 percent of TICPI changes are so large. Additionally, the correlation between a dummy variable that is unity for changes that exceed these thresholds and Turnover (t-1) is significant with both indices (WBCCI: p<.07, N=144; TICPI: p<.05, N=215). There is thus a disproportionate chance of observing large index gains in Turnover (t-1) years. At the same time, large gains are not likely to follow reelections. While five Turnover (t-1) observations have a ¨Y1>.15 WBCCI units, that occurs after only one reelection -Bolivia 2009. Similarly, eight elections are followed by TICPI gains of .4 or more, but only two are reelections, and one of those (Chile 2000) could arguably have been coded as a turnover election. 24 Two-Year Index Changes The examination of two-year changes permits use of the pre-2002 WBCCI, allows a test of whether the two-year surge is statistically significant, and allows a reasonable test of the "decline" portion of H1. Because H1 does not predict an exact timing of the postsurge decline -its starting year can vary -it would be overly restrictive to test for it in any particular year. This section studies the size and frequency of index declines during the second two-year period after turnover elections. 25 Table 3 lists the first four years of index changes that followed turnover elections, excluding those that immediately followed or preceded an irregular change. It shows that most countries' scores increased over the first two years (Y12) (though some of the gains were larger after only one year) and that many of the improvements decreased over years three and four (Y34). With regard to turnover elections before 2009, 73 percent (WBCCI) and 67 percent (TICPI) saw a ¨Y12>0. Of those that occurred before 2007 and had a ¨Y12>0, 67 percent (WBCCI) and 40 percent (TIPCI) saw a ¨Y34<0. These figures do not account for other determinants of index change, and it is clear from considerably. In particular, countries that experienced either rapid economic growth during Y34 (second-to-last column) or a marked acceleration in the growth rate from Y12 to Y34 (final column) were less likely to see a ¨Y34<0. Indeed, the Pearson correlation (r) between Y34 index changes and growth rates is .61 when using the WBCCI (p<.01, N=20) and .51 when using the TICPI (p<.02, N=18). 26 Interestingly, the opposite relationship is observed during Y12, when r=-.41 with the WBCCI and r=-.31 with the TICPI. This may occur because worse economic conditions at the time of the turnover election solicit greater relief with the change of administration. Regardless, it is unsurprising that the index-economic growth relationship is more strongly positive during Y34. A poorly performing economy is less likely to damage perceptions of governmental performance when the government is brand new than when the government has been in office for a few years. Anyway, the empirical connection between index changes and growth rates during Y34 implies that the entries that are most informative about postsurge declines are in the middle of Table 3 -that is, they are administrations that did not experience either rapid economic growth or an economic contraction during Y34. Tellingly, these cases were exceedingly likely to exhibit index downgrades during Y34. 
To provide a statistical assessment of the two-year surge and two-year decline while controlling for economic conditions and other variables, I use OLS-PCSE and a model similar to (1) but which focuses on two-year index changes, ΔY2it = index_it - index_i(t-2). The regressions compare only a few types of observations: (a) those that are two years after an election (turnover or reelection), excluding those that are one year after either an irregular transfer or another election, and (b) those that are four years after a turnover election, excluding those that are three or fewer years after an irregular transfer or another election. As before, the syntax refers to these the other way around, as Turnover (t-2) =1, Reelection (t-2) =1, or Turnover (t-4) =1. The restriction to only these observations ensures the model is unencumbered by observations that straddle elections or that cover overlapping two-year periods. 27

A dummy variable for each group is included in each regression, and each regression excludes a constant term. Therefore, each coefficient compares a group mean to zero rather than to an excluded category of observations. The model also includes (i) a two-year lag of the dependent variable (ΔY2i(t-2)), (ii) the Two-Year GDP Growth Rate_it, which is the mean-centered change in real GDP per capita over the previous two years, (iii) this last variable interacted with Turnover (t-4), which allows the model to account for differing relationships between index change and GDP growth throughout the electoral cycle, and (iv) Irregular Change (t-2).

The regression results are provided in Table 4. The first WBCCI estimates (regression one) are consistent with H1: there is a positive estimate on Turnover (t-2) and a negative estimate on both Turnover (t-4) and the lagged dependent variable. Both dummies are statistically significant, and a joint test of them with the lagged dependent variable shows all three to be highly significant (p<.001 in the χ2 at the bottom of the table). 28 The estimates suggest that at mean levels of GDP growth, the average turnover election is followed by a .09 unit increase over Y12 and a .06 unit reversal over Y34. The second regression includes Two-Year Scandal_it, a dummy that is unity if Scandal (t-1) =1 or Scandal (t-2) =1. The variable is excluded from the first regression because this paper's theory anticipates its collinearity with Turnover (t-4) - that is, if one of the reasons for the decline is the increased frequency of high-level corruption scandals, then the statistical model does not require both Turnover (t-4) and Two-Year Scandal_it. The way that the results change in regression two conforms with this line of thinking, as the scandal variable causes the estimate on Turnover (t-4) to move toward zero. Regression three indicates that at mean levels of GDP growth the average TICPI increase over Y12 is .17 units and the decline over Y34 is one-third as large.

27 The restriction makes for an unbalanced WBCCI panel. (The TICPI panel was already unbalanced due to "missing" data.) The regressions in Table 4 use the "pairwise" option in Stata's xtpcse, which allows estimation of the interpanel covariance matrix to be based on the years that are common to any two panels, rather than the years that are common to all panels.
28 After xtpcse, a test of multiple coefficient estimates is given by a chi-squared statistic.
About two-thirds of the decline is attributed to the Turnover (t-4) coefficient; the remaining third, to the lagged dependent variable. Although neither estimate is statistically significant, a joint test of them plus Turnover (t-2) rejects the null hypothesis. As with the WBCCI, the addition of the scandal variable causes the Turnover (t-4) coefficient to move toward zero (regression four), which suggests anew that the decline is partly due to corruption scandals. In short, all four regressions in Table 4 provide evidence of a surge-and-decline pattern. The data also support H2, as not one regression suggests a significant reelection effect.

Lastly, I illustrate the results of regressions that are similar to those in columns one and three of Table 4 except for their inclusion of Reelection (t-4) observations and their exclusion of observations not in four-year terms. 29 Because this model is unencumbered by terms of varying length, it facilitates plots of predicted index changes over whole presidential terms. Holding GDP change at its mean and setting the lagged dependent variable to zero for the first two-year period, Figure 3 provides this plot for the WBCCI. 30 The figure shows that the average two-year surge is roughly one-tenth of the standard deviation in the global index and that it declines by about one-third over Y34. The pattern repeats if the president loses his reelection bid and a new government is installed. 31 If the government wins reelection, however, the WBCCI decreases over each of the next two-year periods to the point that the predicted change at the end of the term is close to where it was eight years before. The TICPI pattern is similar (Figure 4), the only differences being that the postsurge decline is less dramatic and that there is some improvement with reelections. Yet, the difference associated with a two-term president is about the same as with the WBCCI, and much more modest than the change that accompanies a president's first two years in office.

Source: Author's own calculation and compilation.

29 The results are not shown for considerations of space; they are available from the author's website.
30 The figures exclude confidence intervals because post-xtpcse tests of multiple coefficients are provided by a chi-square statistic.
31 The second-term surge differs slightly because the lagged change is not zero.

Discussion

The main conclusion of this study is that to understand perceptions of corruption in Latin American countries one must attend to the presidential election cycle and particularly the changes that follow turnover elections. However, the results and theory have other implications. For example, the temporary boost in perceived corruption control that follows partisan turnover in the executive branch will tend to strengthen public support for democratic institutions (see Canache and Allison 2005; Seligson 2006; Bohn 2012) as well as soften the demand for political reforms to counter corruption. The timing of the latter is important because it occurs when governments typically have the most political capital to spend on a reform effort. 32

32 The ebb and flow of corruption scandals and corruption perceptions may also matter for nonconcurrent elections, with those that occur shortly after turnover tending to be more favorable to the government than those that occur later. Cf. Shugart (1995).

Other implications depend on the specific reasons for perceptions change.
I have argued that in any given case one or more of several mechanisms are responsible for the turnover surge. One is that turnover will tend to decrease political corruption whenever the outgoing government is unusually corrupt or wherever corruption tends to grow with government tenure. Separately, a public that is sanguine about a change of political leadership can overestimate the degree to which the new administration improves corruption control. The relative lack of high-level corruption scandals during the early stages of a new government is a third way that turnover can buoy perceptions of corruption control. Although this study does not test these mechanisms, it provides some evidence that high-level corruption scandals in Latin America have rarely occurred during a new government's first couple of years. Furthermore, its quantitative analysis suggests that scandals are at least part of the reason for the postsurge decline in perceived corruption control. Future research could seek to ascertain the relative contributions of scandals and presidential approval ratings to the surge and decline, though it will remain difficult to determine the degree to which high-level corruption varies and influences perceptions over such periods of time. Measuring corruption (rather than perceptions) is only part of the challenge; another is dealing with the issue of corruption affecting the other two explanatory variables (the likelihood of corruption scandals and public opinion about the government) (cf. Zechmeister and Zizumbo-Colunga 2013).

A priori, however, the turnover surge should be viewed as more rooted in perception than reality. This is not because there are not good reasons to think that political corruption might often decline (even if only temporarily) after an election has brought a partisan transfer of power. Rather, it is because the perceptions surge occurs precisely when public optimism is high and corruption scandals are few, and because there is already much evidence to suggest that corruption perceptions diverge from corruption realities (e.g., Morris 2008, 2009; Olken 2009). For the surge to reflect (rather than merely coincide with) a change in actual corruption, it must be the case that observers perceive that change and that their perceptions are little influenced by expectations or hopes that the new administration will resist and restrain corruption better than its predecessor. To the extent that this does occur, the perceptions surge documents the longevity of transition politics in presidential democracies, and the data would suggest that turnover results in cleaner politics for about two years. However, it is more likely that the importance of the perceptions surge lies in the realm of public opinion and the political consequences thereof.

The empirical findings of this paper have still broader relevance that stems from corruption perceptions indices being so widely used. A considerable amount of cross-national research on the causes and consequences of corruption employs these indices, and the results presented here suggest that in such applications it may be necessary to account for the regular fluctuations that appear after turnover elections.
While this study cannot say whether turnover will have the same importance for index rankings in models that include parliamentary systems or nondemocracies (though that is a worthwhile avenue for future research), it demonstrates that turnover has an important effect on index values in Latin America. The pattern also matters for organizations that utilize perceptions indices to gauge countries' strides against corruption. This includes the Millennium Challenge Corporation (MCC), which allocates foreign aid according to how countries perform over time in the WBCCI (and other governance indicators). For instance, Honduras received a grant of USD 205 million from the MCC in 2004, not long after the turnover election of November 2001. The WBCCI was not compiled in 2001, but Honduras' WBCCI increased markedly from 2002 to 2003. In fact, Honduras' largest one-year improvement in the WBCCI for the period examined for this study was in 2003. That change may have accompanied real headway against corruption, and the turnover election may have contributed to such a development. However, the ability of turnover to independently improve corruption perceptions should be an important consideration for the MCC and other organizations that use perceptions indices to evaluate corruption control. Postturnover periods may demand special scrutiny to ensure that any index gains relate to significant developments in corruption control and not merely to public optimism about government turnover. It similarly matters how indices are interpreted by the media, not least because it can influence diagnoses and reform agendas. It is also possible for foreign discourse about a country's index rank to influence domestic perceptions of corruption (Brinegar 2009). Of course, this study emphasizes the reverse process (i.e., that perceptions affect the indices), but feedback effects are possible. In any case, a worthwhile question for future research concerns the relative contributions of foreign and domestic perceptions to postturnover index change. Because the indices combine data from both types of audiences, we may suppose that events and trends that affect the indices do so via both audiences. However, the possibility remains that the turnover surge is driven predominantly by one audience or the other. Any such divergence would beg another question about which audience more accurately perceives the amount of corruption in a political system. The answer to that question would help us better understand the degree to which corruption perceptions reflect corruption realities.
Homology equivalences of manifolds and zero-in-the-spectrum examples

Working with group homomorphisms, a construction of manifolds is introduced to preserve homology groups. The construction gives as special cases Quillen's plus construction with handles obtained by Hausmann, the existence of one-sided $h$-cobordisms of Guilbault and Tinsley, and the existence of homology spheres and higher-dimensional knots proved by Kervaire. We also use it to get counter-examples to the zero-in-the-spectrum conjecture found by Farber-Weinberger, and by Higson-Roe-Schick.

Introduction

The aim of this note is to propose a general surgery plus construction on manifolds. This is a manifold version of the generalized plus construction for CW complexes found by the author in [21]. As applications, we give a unified approach to the plus construction with handles of Hausmann [8], the (mod L)-one-sided h-cobordism of Guilbault and Tinsley [6], the existence of homology spheres of Kervaire [15] and the existence of higher-dimensional knots of Kervaire [14]. We also use it to get some examples for the zero-in-the-spectrum conjecture found by Farber-Weinberger [4] and Higson-Roe-Schick [11]. First, we briefly review these existing works.

Let M be an n-dimensional (n ≥ 5) closed manifold with fundamental group π 1 (M) = H. Suppose that Φ : H → G is a surjective group homomorphism of finitely presented groups with the kernel ker Φ a perfect subgroup. Hausmann shows that Quillen's plus construction with respect to ker Φ can be made by adding finitely many two- and three-handles to M × 1 ⊂ M × [0, 1] (cf. Section 3 in Hausmann [8] and the definition of ϕ 1 on page 115 in Hausmann [9]). In the resulting cobordism (W ; M, M ′ ), both W and M ′ have the homotopy type of the Quillen plus construction M + . In other words, the fundamental group π 1 (M ′ ) = π 1 (W ) = G and, for any Z[G]-module N, the inclusion map M ֒→ W induces homology isomorphisms H * (M; N) ∼ = H * (W ; N).

An n-dimensional homology sphere is a closed manifold M having the homology groups of the n-sphere, i.e. H * (M) ∼ = H * (S n ). Let π be a finitely presented group and n ≥ 5. Kervaire shows that there exists an n-dimensional homology sphere M with π 1 (M) = π if and only if the homology groups satisfy H 1 (π; Z) = H 2 (π; Z) = 0.

For an integer n ≥ 1, define an n-knot to be a differential embedding f : S n → S n+2 and the group of the n-knot f to be π 1 (S n+2 − f (S n )). Let G be a finitely presentable group. The weight w(G) is the smallest integer k such that there exists a set of k elements α 1 , α 2 , . . . , α k ∈ G whose normal closure equals G. Kervaire [14] shows that a finitely presented group G is the group of an n-knot (n ≥ 5) if and only if H 1 (G; Z) = Z, H 2 (G; Z) = 0 and the weight of G is 1.

The zero-in-the-spectrum conjecture goes back to Gromov, who asked for a closed, aspherical, connected and oriented Riemannian manifold M whether there always exists some p ≥ 0 such that zero belongs to the spectrum of the Laplace-Beltrami operator ∆ p acting on the square integrable p-forms on the universal covering of M. Farber and Weinberger [4] show that the conjecture is not true if the condition that M is aspherical is dropped. More generally, Higson, Roe and Schick [11] show that for a finitely presented group G satisfying H 0 (G; C * r (G)) = H 1 (G; C * r (G)) = H 2 (G; C * r (G)) = 0, there always exists a closed manifold Y of dimension n (n ≥ 6) with π 1 (Y ) = G such that Y is a counterexample to the conjecture if M is not required to be aspherical.
In this note, a more general construction is provided to preserve homology groups. For this, we have to introduce the notion of a finitely G-dense ring. is a unital ring R together with a ring homomorphism φ : Z[G] → R such that, when R is regarded as a left Z[G]-module via φ, for any finitely generated right Z[G]-module M, finitely generated free right R-module F and R-module surjection f : This is an analog of G-dense rings defined in Ye [21]. Examples of finitely G-dense rings include the real reduced group C * -algebra C * R (G), the real group von Neumann algebra N R G, the real Banach algebra l 1 R (G), the rings k = Z/p (prime p) and k ⊆ Q a subring of the rationals (with trivial Gactions), the group ring k[G], and so on. Conventions. Let π and G be two groups. Suppose that R is a Z[G]module and BG, Bπ are the classifying spaces. For a group homomorphism α : π → G, we will denote by H * (G, π; R) the relative homology group H * (BG, Bπ; R) with coefficients R. All manifolds are assumed to be connected smooth manifolds, until otherwise stated. Our main result is the following. Theorem 1.2. Assume that G is a finitely presented group and (R, φ) is a finitely G-dense ring. Let X be a connected n-dimensional (n ≥ 5) closed orientable manifold with fundamental group π = π 1 (X). Assume that α : π → G is a group homomorphism of finitely presented groups such that the image α(π) is finitely presented and Suppose either that R is a principal ideal domain or that the relative homology group H 1 (G, π; R) is a stably free R-module. When 2 is not invertible in R, suppose that the manifold M is a spin manifold. Then there exists a closed R-orientable manifold Y with the following properties: (i) Y is obtained from X by attaching 1-handles, 2-handles and 3-handles, such that (ii) π 1 (Y ) ∼ = G and the inclusion map g : X → W, the cobordism between X and Y , induces the same fundamental group homomorphism as α, and (iii) for any integer q ≥ 2 the map g induces an isomorphism of homology groups Theorem 1.2 has the following applications. • When ker α is perfect and R = Z[G] the group ring, this is Quillen's plus construction with handles, which is obtained by Hausmann [8] and [9] (see Corollary 4.1). • When X = S n (n ≥ 5), R = Z and G a superperfect group, Theorem 1.2 recovers the existence of homology spheres, which is obtained by Kervaire [15] (see Corollary 4.4). • When X = S n (n ≥ 6) and R = C * R (G), the theorem yields the results obtained by Farber-Weinberger [4] and Higson-Roe-Schick [11] on the zero-in-the-spectrum conjecture (see Corollary 4.7). For Bousfield's integral localization, Rodríguez and Scevenels [19] show that there is a topological construction that, while leaving the integral homology of a space unchanged, kills the intersection of the transfinite lower central series of its fundamental group. Moreover, this is the maximal subgroup that can be factored out of the fundamental group without changing the integral homology of a space. As another application of Theorem 1.2 with α surjective and R = Z, we obtain a manifold version of Rodríguez and Scevenels' result. Corollary 1.3. Let n ≥ 5 and X be a closed n-dimensional spin manifold with fundamental group π and N a normal subgroup of π. The following are equivalent. (i) There exists a closed manifold Y obtained from X by adding 2-handles and 3-handles with π 1 (Y ) = π/N, such that for any q ≥ 0 there is an isomorphism H q (Y ; Z) ∼ = H q (X; Z). 
(ii) The group N is normally generated by finitely many elements and is a relatively perfect subgroup of π, i.e. [π, N] = N. The article is organized as follows. In Section 2, we introduce some basic facts about finitely G-dense rings, surgery theory, Poincaré duality with coefficients and one-sided homology cobordism. The main theorem is proved in Section 3 and some applications are given in Section 4. Preliminary results and basic facts 2.1 Finitely G-dense rings Recall the concept of finitely G-dense rings in Definition 1.1 (see also Ye [22], Definition 1). Compared with the definition of G-dense rings, we require all the modules in Definition 1.1 to be finitely generated. It is clear that a Gdense ring is finitely G-dense. The following lemma from Ye [22] gives some typical examples of finitely G-dense rings. Lemma 2.1 ( Ye [22], Lemma 2). Finitely G-dense rings include the real reduced group C * -algebra C * R (G), the real group von Neumann algebra N R G, the real Banach algebra l 1 R (G), the rings k = Z/p (prime p) and k ⊆ Q a subring of the rationals (with trivial G-actions), the group ring k[G]. Similar to Example 2.6 in Ye [21], one can show that the ring of Gaussian integers Z[i] is not finitely G-dense for the trivial group G. Basic facts on surgery The proof of Theorem 1.2 is based on some facts in surgery theory. The following definition and lemmas can be found in Ranicki [18]. Definition 2.2. Given an n-manifold M and an embedding The following lemma gives homotopy relations between the surgery trace and the manifolds. where attaching maps are induced by embedding of handles. 1) If 1 ≤ n ≤ m − 2, then W and M ′ has the same orientation type as M (which means that M ′ is orientable iff M is orientable). 2) If n = m − 1 and M is orientable, so are M ′ and W. 4) If n = 0 and M is nonorientable, then so are W and M ′ . Poincaré duality with coefficients In this subsection, we collect some facts about the Poincaré duality with coefficients. For more details, see Chapter 2 of Wall's book [20]. Let X be a finite CW complex with a universal covering spaceX. Denote by C * (X) the cellular chain complex of X and by C * (X) the chain complex Hom Zπ 1 (X) (C * (X), Zπ 1 (X)). We call a finite CW complex X a simple Poincaré complex if for some positive integer n and a representative cycle ξ ∈ C n (X)⊗ Zπ 1 (X) Z, the cap product induces a chain homotopy equivalence and the Whitehead torsion is vanishing. Similarly, we can define Poincaré pair (Y, X) by a simple homotopy equivalence When X is an n-dimensional closed manifold, X is a simple Poincaré complex of formal dimension n. When X is a compact manifold with boundary ∂X, the pair (X, ∂X) is a simple Poincaré pair (cf. Theorem 2.1 on page 23 in Wall [20]). Lemma 2.5. Let M be an n-dimensional orientable compact manifold with boundary ∂M = X∪Y for closed manifolds X and Y. Then for any integer q ≥ 0 and any Zπ 1 (M)-module R, there is an isomorphism Proof. Since X is a Poincaré complex and (M, X∪Y ) is a Poincaré pair, the lemma can be proved by considering the long exact homology and cohomology sequences for the cofiber sequence of pairs using Poincaré duality for X and the pair (M, X∪Y ). When R is commutative, this is Theorem 3.43 of Hatcher's textbook [7]. The proof of general case is similar. One-sided R-homology cobordism Recall from Guilbault and Tinsley [6] that a one-sided h-cobordism (W ; X, Y ) is a compact cobordism between closed manifolds such that Y ֒→ W is a homotopy equivalence. 
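The displayed isomorphism in Lemma 2.5 above appears to have been lost in extraction. Judging from the proof's appeal to Poincaré duality and to Theorem 3.43 of Hatcher's textbook, the intended statement is presumably the Poincaré-Lefschetz duality recorded below; this is offered as a plausible reconstruction, not a quotation of the original.

% Presumed statement of Lemma 2.5: for a compact orientable n-manifold M with
% boundary decomposed as \partial M = X \cup Y,
\[
  H_q(M, X; R) \;\cong\; H^{\,n-q}(M, Y; R) \qquad \text{for all } q \ge 0 .
\]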
Motivated by this, we can define one-sided homology cobordism. Definition 2.6. Let (W ; X, Y ) be a compact cobordism between closed manifolds and R a Zπ The following are some easy facts. Proof. By the Whitehead theorem, the homotopy equivalence follows from the homology equivalence with coefficients Z[π 1 (W )] and the isomorphism of fundamental groups. (2) Let (W ; X, Y ) be a one-sided R-homology cobordism. For any integer q ≥ 0, the inclusion map induces an isomorphism Proof. This follows directly from Poincaré duality with coefficients as in the previous subsection. (3) For a one-sided h-cobordism (W ; X, Y ), the inclusion map X ֒→ W is a Quillen's plus construction. Proof. Since Y ֒→ W is a homotopy equivalence, we have that for any integer q ≥ 0, the relative cohomology group H q (W, Y ; Z[π 1 (W )]) = 0. By Poincaré duality, the inclusion map X ֒→ W induces homology isomorphism with coefficients Z[π 1 (W )]. According to 4.3 xi in Berrick [2], the inclusion map is then a Quillen's plus construction. (4) Let R be a principal ideal domain and (W ; X, Y ) a one-sided R-homology cobordism. Then for any integer q ≥ 0, there is an isomorphism Proof. When the inclusion map Y ֒→ W induces an R-homology equivalence, it also induces an R-cohomology equivalence by the universal coefficients theorem. By Poincaré duality, X has the same homology as W, also as Y. 3 Proof of Theorem 1.2 In this section, we will prove Theorem 1.2. First, we need some facts about finitely presented groups. Recall that a normal subgroup N of a group π is called normally finitely generated if there exists a finite set S ⊂ N such that N is generated by elements of the form gsg −1 for s ∈ S and g ∈ π. The following lemma gives an elementary characterization of when a normal subgroup is normally finitely generated. Since it is helpful for our later argument, we present a short proof here. Lemma 3.1. Let π be a finitely presented group and N a normal subgroup. Then N is normally finitely generated if and only if π/N is finitely presented. Proof. The necessity of the condition is obvious. Conversely, choose a presentation of π/N with finitely many generators and finitely many relations. Let F n be the free group with n generators and f : F n → π a surjection, with normally finitely generated kernel K. We can also assume that that the generators of F n are mapped surjectively to the generators of π/N by the composition of f with the quotient map π → π/N. Here we use the fact that the condition that a group is finitely presented does not depend on the choice of a generator system (cf. Prop. 1.3 in Ohshika [17]). Since π/N is finitely presented, f −1 (N) is normally finitely generated. Then N = f (f −1 (N)) is normally finitely generated. In order to prove Theorem 1.2, we use the following lemma, which is a more general version of Hopf's exact sequence. Lemma 3.2 (Lemma 2.2 in Higson-Roe-Schick [11]). Let G be a group and V be a left Z[G]-module. For any CW complex X with fundamental group G and universal covering spaceX, there is an exact sequence Proof of Theorem 1.2 We construct a manifold Y 1 whose fundamental group π 1 (Y ) = G as follows. Fix a finite presentation x 1 , x 2 , . . . , x k |y 1 , y 2 , . . . , y l of α(π). Extend the presentation of α(π) by generators and relations to yield a presentation x 1 , x 2 , . . . , x k , g 1 , g 2 , . . . , g u |y 1 , y 2 , . . . , y l , r 1 , r 2 , . . . , r v of G by adding some generators and relations. For adding the generators g 1 , g 2 , . . . 
, g u , let S 0 i be a copy of the 0-sphere S 0 and be an embedding with disjoint image. Add 1-handles along f 1 to X. The resulting manifold is X 1 and denote by W 1 the surgery trace. We see that the manifold X 1 is actually the connected sum X♯ u i=1 S 1 × S n−1 and can have the same orientation type as X. Denote by e j i a copy of j-cells indexed by i. By Lemma 2.3, there are homotopy equivalences According to the construction, the fundamental group of X 1 is the free product of π = π 1 (X) and the free group of u generators. By Lemma 3.1, the kernel ker α is normally generated by finitely many elements z 1 , z 2 , . . . , z p . Denote by S the finite set {z 1 , z 2 , . . . , z p , r 1 , r 2 , . . . , r v }. Choose as usual a contractible open set U in X 1 as "base point". According to Whitney's theorem, any element in π 1 (X 1 ) is represented by an embedded 1-sphere. Since the manifold X 1 is orientable, the normal bundle of any embedded 1-sphere is trivial. For the elements in S, let S 1 λ be a copy of the 1-sphere S 1 and let f 2 : ∐ λ∈S S 1 λ × D n−1 → X 1 be disjoint embeddings representing the corresponding elements in π 1 (X 1 ). Do surgery along f 2 by attaching 2-handles. The resulting manifold is X 2 , and denote by W 2 the surgery trace. By Lemma 2.3 once again, there are homotopy equivalences Since n ≥ 4, the fundamental group of X 2 is G. Let W ′ be the manifold obtained by gluing the two traces W 1 and W 2 together along X 1 . There are homotopy equivalences This implies the fundamental group of W ′ is also G, since n > 3. We consider the homology groups of the pair (W ′ , X). LetX andW ′ be the universal covering spaces of X and W ′ . By Lemma 3.2, there is a commutative diagram where the middle horizontal chain is the long exact sequence of homology groups for the pair (W ′ , X) and the two vertical lines are the Hopf exact sequences as in Lemma 3.2. Notice that is injective by assumption. This implies j 1 : H 2 (W ′ ; R) → H 2 (W ′ , X; R) is surjective in the above diagram. Note that the map α * : H 2 (π; R) → H 2 (G; R) is surjective by assumption. By a diagram chase (for more details, see the proof of Theorem 1.1 in Ye [21]), there is a surjection AsW ′ is simply connected, the homology group H 2 (W ′ ) is isomorphic to π 2 (W ′ ) ∼ = π 2 (X 2 ). Notice that the homology group H 2 (W ′ , X; R) can be taken to be a finitely generated free R-module as in the proof of Theorem 1 in Ye [21]. Since the ring R is a finitely G-dense ring in the sense of Definition 1.1, we can find a finite set S ′ of elements in π 2 (X 2 ) such that the image j 1 • j 4 (S ′ ) forms an R-basis for H 2 (W ′ , X; R). Then there are maps b λ : S 2 λ → X 2 with λ ∈ S ′ such that for all q ≥ 2, the composition of maps is an isomorphism. We construct the manifolds Y and W as follows. Notice that an embedded 2-sphere in a k-dimensional (k ≥ 5) orientable manifold M has trivial normal bundle if and only if it represents 0 in π 1 (SO(k)) = Z/2 through the classifying map M → BSO(k) (cf. page 45 of Milnor [16]). When 2 is invertible in the ring R, we can always choose the 2-spheres in S ′ to have trivial normal bundles. When 2 is not invertible in R, the manifold X is a spin manifold by assumption. This implies any embedded 2-sphere has a trivial normal bundle. Since n ≥ 5, we can choose a map as disjoint embedding, whose components represent the elements b λ . Do surgery along f 3 by attaching 3-handles. Let Y denote the resulting manifold and W 3 denote the surgery trace. 
Suppose that W is the manifold obtained by gluing W ′ and W 2 along X 2 , which is a cobordism between X and Y. By Lemma 2.3, there are homotopy equivalences ∪ λ∈S ′ e n−2 λ . By the van Kampen theorem, the fundamental group of Y is still G, since n > 4. Denoting by H * (−) the homology groups H * (−; R), we have the following commutative diagram: By a five lemma argument and the assumption that α * : H 1 (π; R) → H 1 (G; R) is injective, for any integer q ≥ 2, the relative homology group H q (W, X; R) = 0. This shows for any integer q ≥ 2, the homology groups H q (X; R) ∼ = H q (W ; R) and proves the isomorphism in (1). Remark 3.3. From the proof, we can see that for some special group homomorphism α and coefficients R, the orientability or spin-ness of X in Theorem 1.2 can be dropped. For example, when α is surjective and ker(α) < [π, π], we do not need X to orientable. When ker(α) is perfect (or weakly L-perfect for some normal group), the spin-ness of X can be dropped (cf. the proof of Theorem 4.1 and Theorem 5.2 in Guilbault and Tinsley [6]). Applications In this section, we give several applications of Theorem 1.2. When Z[π/P ] Z[π/P ] P ab ∼ = Z Z P ab = 0, we can see is surjective and H 1 (π; Z[π/P ]) → H 1 (π/P ; Z[π/P ]) is an isomorphism. Therefore, the conditions of group homomorphism α are satisfied. By Theorem 1.2, there exists a closed spin manifold Y and cobordism (W ; M, Y ) such that for any integer q ≥ 0, there is an isomorphism This implies the inclusion map g : X → W is the Quillen plus construction (cf. 4.3 xi in Berrick [2]). Therefore, for all integers q, the relative cohomology groups H q (W, X; Z[G]) = 0. According to the Poincaré duality in Lemma 2.5, for each integer q ≥ 0, the relative homology group H q (W, Y ; Z[G]) = 0 and there is an isomorphism as well. This means the universal covering spaces of Y and W are homology equivalent and therefore also homotopy equivalent. Since Y and W have the same fundamental group, this implies the inclusion map Y → W is a homotopy equivalence. This finishes the proof. Proof. The proof is similar to that of Corollary 4.1. When ker(α ′ ) is L ′perfect, we have that Z[G/L] Z[G] ker(α) ab = 0. According to the 5-term exact sequence for group homology (cf. (8.2) in Hilton and Stammbach [12], page 202) we can see that H 2 (α) is surjective and H 1 (α) is isomorphic. By Theorem 1.2 with R = Z[G/L] and the remark followed, there exists a cobordism (W ; A, B) such that π 1 (B) = π 1 (W ) = G and the inclusion B ֒→ W induces homology equivalence with coefficients R. Considering the covering spaces of B and W with deck transformation group G/L, we can see that the inclusion B ֒→ W also induces a cohomology equivalence with coefficients R. By Poincaré duality in Lemma 2.5, the inclusion A ֒→ W induces homology equivalence with coefficients R. This finishes the proof. Surgery preserving integral homology groups In this subsection, we study the case when the integral homology groups of a manifold are preserved by doing surgery. Corollary 1.3 is a special case of Theorem 1.2 when R = Z. where the left vertical map is an isomorphism. This shows that the right vertical map is an epimorphism. According to the same exact sequence (3) above, we have N = [π, N], which means N is relative perfect. Since π 1 (X)/N = π 1 (Y ) is finitely presented, N is normally finitely generated by Lemma 3.1. Corollary 1.3 is a manifold version of a result obtained by Rodríguez and Scevenels [19] for CW complexes. 
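For reference, the five-term exact sequence invoked in the proofs of this section (cf. (8.2) in Hilton and Stammbach [12]) reads, for a group extension 1 → N → G → Q → 1 with integral coefficients, as displayed below. The proofs above apply a version of this sequence with coefficients in R = Z[G/L]; the integral form is shown only as a reminder of its shape.

% Five-term exact sequence in low-degree group homology for 1 -> N -> G -> Q -> 1:
\[
  H_2(G) \longrightarrow H_2(Q) \longrightarrow N/[G, N] \longrightarrow H_1(G) \longrightarrow H_1(Q) \longrightarrow 0 .
\]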
The fundamental groups of homology manifolds In this subsection, we study the fundamental groups of manifolds with the same homology type as a 2-connected manifold. Conversely, suppose that G is a finitely presented group with H 2 (G; R) = H 1 (G; R) = 0. Let X = M, π = 1, the trivial group and f : π → G the obvious group homomorphism. Note that the 2-connected manifold X is always a spin manifold and R is a principal ideal domain. According to Theorem 1.2, we get a manifold Y with π 1 (Y ) = G such that for any integer q ≥ 0, there is an isomorphism H q (W ; R) ∼ = H q (M; R) for the cobordism W. According to the universal coefficient theorem, for all integers q ≥ 0 the relative cohomology groups H q (W, M; R) = 0. Therefore, for any integer q ≥ 0, there is an isomorphism H q (Y ; R) ∼ = H q (M; R) by Theorem 1.2 (ii). Recall that an n-dimensional R-homology sphere is an n-dimensional manifold Y such that H i (Y ; R) = H i (S n ; R) for any integer i ≥ 0. The first part of the following result proved by Kervaire in [15] is a special case of Theorem 4.3 when M = S n and R = Z. Hausmann and Weinberger [10] constructed a superperfect group G for which any 4-manifold Y with π 1 (Y ) = G satisfies χ(Y ) > 2. As a consequence it follows that Theorem 4.3 and Corollary 4.4 do not extend to dimension four. The fundamental groups of higher-dimensional knots In this subsection, we study the fundamental groups of higher-dimensional knots. The following result proved by Kervaire [14] is a corollary of Theorem 1.2. (Recall the definition of weight w(G) from the Introduction). Corollary 4.5. Given an integer n ≥ 3, a finitely presentable group G is isomorphic to π 1 (S n+2 − f (S n )) for some differential embedding f : S n → S n+2 if and only if the first homology group H 1 (G; Z) = Z, the weight of G is 1 and the second homology group H 2 (G; Z) = 0. Proof. Suppose for a finitely presentable group G, we have H 1 (G; Z) = Z, H 2 (G; Z) = 0 and the weight w(G) = 1. Let X = S n+2 , π = 1, the trivial group, and α : π → G the trivial group homomorphism. Notice that α induces an injection of the first homology groups and surjection of the second homology groups. By Theorem 1.2 with R = Z, we get a closed manifold Y with π 1 (Y ) = G, obtained from S n+2 by attaching 1 -handles, 2-handles and 3-handles. Suppose that W is the surgery trace. By Poincaré duality, for any integer q ≤ n + 1 the relative cohomology group H q (W, Y ) = 0. According to the universal coefficient theorem, we have that for each integer 2 ≤ i ≤ n + 1 there is an isomorphism Let γ ∈ G be an element such that G is normally generated by γ and ϕ : S 1 → Y be a differential embedding representing γ. Extend ϕ to be an embedding ϕ ′ : S 1 × D n+1 → Y. Do surgery to Y along ϕ ′ to get a manifold M. It can be easily seen that M is simply connected and for each integer 1 ≤ i ≤ n the homology group H i (M) = 0. Therefore, M is a (n + 2)-sphere by the solution of higher-dimensional Poincaré conjecture (note that all manifolds are assumed to be smooth). Let φ : D 2 × S n → M be the embedding and choose f = φ(0, −) to be the embedding S n → M. It can be directly checked that This finishes the "if" part. Conversely, suppose f : S n → S n+2 is a differential embedding. According to Alexander duality, we have H 2 (S n+2 −f (S n )) = 0 and H 1 (S n+2 −f (S n )) = H 1 (G; Z) = 0. By Hopf's theorem in Lemma 3.2 (with the coefficient V = Z), the second homology group H 2 (G; Z) = 0. 
Let α : S 1 → S n+2 − f (S n ) be an embedding such that α(S 1 ) bounds a small 2-disc in S n+2 that intersects f (S n ) transversally at exactly one point. Then the group π 1 (S n+2 − f (S n )) is normally generated by the element represented by α. For more details, see the proof of Lemma 2 in Kervaire [14]. This proves the weight w(G) = 1 and finishes the proof. Corollary 4.5 is Theorem 1 in Kervaire [14]. Similarly, we can show that Theorem 3 in [14] concerning the fundamental groups of links is also a corollary of Theorem 1.2. That is for an integer n ≥ 3, a finitely presentable group G is isomorphic to π 1 (S n+2 − L k ) for some k disjointly embedded n-spheres L k if and only if H 1 (G; Z) = Z k , w(G) = k and H 2 (G; Z) = 0 (cf. Theorem 3 in [14]). The proof is of the same pattern as that of Corollary 4.5 and will be left to the reader. Zero-in-the-spectrum conjecture In the notation of the Introduction, zero not belonging to the spectrum of ∆ = ∆ * can also be expressed as the vanishing of H * (M; C * r (π 1 (M))). The following is a version of the zero-in-the-spectrum conjecture using homology. For more details, we refer the reader to the book of Lück [13]. Conjecture 4.6. Let M be a closed, connected, oriented and aspherical manifold with fundamental group π. Then for some i ≥ 0, H i (X; C * r (π)) = 0. If the condition that X is aspherical is dropped, the following corollary, which is a special case of Theorem 1.2 when R = C * R (G) and π = 1, shows the above conjecture is not true. This result is a generalization of the results obtained by Farber-Weinberger [4] and Higson-Roe-Schick [11]. Recall that the real C * -algebra C * R (G) is a finitely G-dense ring. For every integer n ≥ 6 there is a closed manifold M of dimension n such that π 1 (Y ) = G and for each integer n ≥ 0, the homology group H n (Y ; C * r (G)) = 0. Proof. According to Proposition 4.8 in Ye [21], the vanishing of lower degree homology groups with coefficients C * r (G) is the same as that with coefficients C * R (G). Then we have H 0 (G; C * R (G)) = H 1 (G; C * R (G)) = H 2 (G; C * R (G)) = 0.
Effect of early embryonic deletion of huntingtin from pyramidal neurons on the development and long-term survival of neurons in cerebral cortex and striatum We evaluated the impact of early embryonic deletion of huntingtin (htt) from pyramidal neurons on cortical development, cortical neuron survival and motor behavior, using a cre-loxP strategy to inactivate the mouse htt gene (Hdh) in emx1-expressing cell lineages. Western blot confirmed substantial htt reduction in cerebral cortex of these Emx-httKO mice, with residual cortical htt in all likelihood restricted to cortical interneurons of the subpallial lineage and/or vascular endothelial cells. Despite the loss of htt early in development, cortical lamination was normal, as revealed by layer-specific markers. Cortical volume and neuron abundance were, however, significantly less than normal, and cortical neurons showed reduced brain-derived neurotrophic factor (BDNF) expression and reduced activation of BDNF signaling pathways. Nonetheless, cortical volume and neuron abundance did not show progressive age-related decline in Emx-httKO mice out to 24 months. Although striatal neurochemistry was normal, reductions in striatal volume and neuron abundance were seen in Emx-httKO mice, which were again not progressive. Weight maintenance was normal in Emx-httKO mice, but a slight rotarod deficit and persistent hyperactivity were observed throughout the lifespan. Our results show that embryonic deletion of htt from developing pallium does not substantially alter migration of cortical neurons to their correct laminar destinations, but does yield reduced cortical and striatal size and neuron numbers. The Emx-httKO mice were persistently hyperactive, possibly due to defects in corticostriatal development. Importantly, deletion of htt from cortical pyramidal neurons did not yield age-related progressive cortical or striatal pathology.
Introduction An expanded CAG repeat in the ubiquitously expressed Huntingtin gene (termed HD in humans) is causal for the neurodegeneration in the striatum and cortex in Huntington's disease (HD). Among its functions, the protein derived from the normal HD allele (i.e. huntingtin, or Htt in humans) interacts with microtubules, the dynein/dynactin complex and kinesin to regulate the microtubule-dependent transport of proteins and organelles in neurons (Caviston et al., 2007;Colin et al., 2008;Gauthier et al., 2004;McGuire et al., 2006;Saudou and Humbert, 2016). Huntingtin also appears to play a role in dividing cells through its presence in spindle microtubules of the centrosome. Accordingly, huntingtin knockdown has been found to reduce cell division in vitro (Godin et al., 2010). Consistent with an important role of huntingtin in cell division in at least some brain regions, we have found that Hdh −/− embryonic stem cells injected into blastocysts readily survive in brainstem, but are greatly underrepresented in cortical and striatal areas, the major sites of neuron loss in HD (Reiner et al., 2001). Similarly, studies in zebrafish have shown that huntingtin plays a greater role in forebrain than midbrain and hindbrain development (Henshall et al., 2009). We have further found that at embryonic day 12.5, Hdh −/− neuroblasts are as abundant in cortex, striatum, and thalamus as they are in brainstem in mice with blastocyst injection of Hdh −/− embryonic stem cells (Reiner et al., 2003). This suggests that the underrepresentation of Hdh −/− neurons in cortex and striatum in our blastocyst injection chimeras may reflect, at least in part, death of Hdh −/− cells rather than or as well as deficient cell division. Moreover, Conforti et al. (2013) have found that Hdh −/− stem cells tend to adopt a glial rather than a neuronal fate. In addition to possible roles in neuron division, survival and fate choice during development, huntingtin may also play an important role in cortical neuron migration to final adult laminar position during development (Tong et al., 2011). To further address the roles of huntingtin in neurogenesis, neuronal migration and neuronal survival during development of cerebral cortex, we evaluated the impact of early embryonic deletion of Hdh from dorsal telencephalic neuronal and glial progenitors using a cre-loxP strategy to inactivate the mouse Hdh gene in emx1-expressing cell lineages (Emx-htt KO mice). Since emx1 is expressed in the pallial proliferative zone beginning at about E8 (Simeone et al., 1992;Yoshida et al., 1997), Emx-htt KO mice sustain Hdh deletion from cortical pyramidal neuron and glial precursors and their progeny beginning as early as E8 and demonstrably by E10.5 (Gorski et al., 2002). Although the forebrain huntingtin deletion in Emx-htt KO mice is primarily limited to cerebral cortex, we also examined striatum because of the impact of the cortex on striatum via the massive corticostriatal projection system (Reiner et al., 2010), and because a prior study by others showed early exuberance in corticostriatal connections in Emx-htt KO mice (McKinstry et al., 2014). 
Moreover, because huntingtin is believed to be important for forebrain neuron survival (Reiner et al., 2001(Reiner et al., , 2003, and because therapeutics for Huntington's disease designed to deplete huntingtin expression are under development (Crook and Housman, 2013), we also examined the impact of cortical deletion of huntingtin on cortical and striatal neuron abundance up to nearly 2 years of age. We found that embryonic deletion of huntingtin from developing pallium impaired cortical neurogenesis and/or neuronal survival during development, but had no notable effect on migration of cortical neurons to their correct laminar destinations. The resulting deficiency in cortical and striatal neurons beginning early in life was accompanied by a hyper-activity phenotype, but subsequent progressive age-related loss of cortical or striatal neurons was not observed. Thus, an early embryonic deletion of Hdh from cortical pyramidal neurons does not adversely affect the long-term survival of cortical or striatal neurons in adulthood. Animals A total of 22 control mice and 16 Emx-htt KO mice were used in the present study. Of these, five control (2 males, 3 females) and 5 Emx-htt KO mice (2 males, 3 females) were used in in Western blot studies. The remaining mice were used in behavioral and histological studies. Fifteen control mice (11 males, 4 females) and 9 Emx-htt KO mice (6 males, 3 females) were studied behaviorally and by immunolabeling, and 2 control males and 2 Emx-htt KO males were studied behaviorally and by in situ hybridization histochemistry. Mice ranged in age from 2 months to 23.6 months, with a mean age of 11.6 months for control mice and a mean age of 12.2 months for Emx-htt KO mice. No noteworthy male-female differences were seen in the results (other than greater weight in males), and so male-female data are combined. Control mice were littermates of the Emx-htt KO mice, and included Hdh +/+ mice, Hdh flox/+ mice, and HdH +/− mice. As previously reported, mice with one floxed Hdh allele and one wild-type (WT) allele as well as hemizygous Hdh mice are indistinguishable from Hdh +/+ WT mice Duyao et al., 1995;Nasir et al., 1995;Zeitlin et al., 1995), as we also observed here. We refer to all of these different mice expressing WT huntingtin as control mice. All experiments were undertaken in accordance with the National Institutes of Health Guide for Care and Use of Laboratory Animals, Society for Neuroscience Guidelines, and University of Tennessee Health Science Center Guidelines, and had institutional approval. Western blot analysis Total protein extract was obtained from cortex and striatum of three 5-month old and two 12-month old Hdh flox/− mice and three 5-month old and two 12-month old Emx-htt KO mice by tissue homogenization in RIPA buffer (150 mM NaCl, 50 mM Tris pH 8.0, 1% NP-40, 0.1% SDS, 0.5% sodium deoxycholate) containing proteinase inhibitor cocktail (Roche) and phosphatase inhibitor cocktail (Roche). Insoluble debris was discarded after centrifugation, and protein concentration was determined by the Bradford assay (BIO-RAD, Hercules, CA). Approximately 30 μg of protein was separated by SDS-PAGE (8% or 12%) and transferred to nitrocellulose membranes. 
The membranes were blocked in 5% non-fat milk in PBS for 1 h at room temperature and incubated overnight at 4 °C with mouse monoclonal antibody against huntingtin (Millipore MAB 2166, 1:3000), rabbit polyclonal against BDNF (Alomone labs ANT-010, 1:1000), rabbit monoclonal against Erk1/2 (Cell Signaling mAb 4695, 1:1000), rabbit monoclonal against phospho-Erk1/2 (Cell Signaling mAb 4370, 1:2000), rabbit polyclonal against phospho-Akt (Cell Signaling Ab 9271, 1:1000), or mouse monoclonal against beta-tubulin (Chemicon, MAB 3408, 1:5000). The specificity of these antibodies for their target antigens has been demonstrated by the manufacturer by Western blot and in published studies by others (Harrington et al., 2012;Milman and Woulfe, 2013;Czech et al., 2014;Duarte-Neves et al., 2015;Gu et al., 2017;Patel et al., 2017). The membranes were then washed in PBS containing 0.2% Tween, and incubated with anti-mouse or anti-rabbit secondary antibodies at room temperature for 1 h. The membranes were subsequently washed in PBS containing 0.2% Tween, and protein bands were visualized by chemiluminescence (Pierce, Rockford, IL) and autoradiography. For quantification, membranes were scanned and band intensities were analyzed using ImageJ software (http://rsb.info.nih.gov/ij/). Final relative values were determined using the ratio of net band to net loading control. Immunohistochemistry Fifteen control mice (11 males, 4 females) and 9 Emx-htt KO mice (6 males, 3 females) that had been studied behaviorally were subsequently used in immunolabeling studies. Under avertin anesthesia (0.2 ml/g body weight), these mice were perfused transcardially with 6% dextran in 0.1 M sodium phosphate buffer at pH 7.4 (PB), followed by 4% paraformaldehyde, 0.1 M lysine-0.1 M sodium periodate in 0.1 M PB. The brains were removed, stored overnight at 4 °C in 20% sucrose/10% glycerol, and sectioned frozen in the transverse plane at 35 μm on a sliding microtome. A one in six series of brain sections from each mouse was mounted as sectioned, and subsequently stained for cresyl violet. Immunohistochemical single-labeling using peroxidase-antiperoxidase (PAP) procedures described previously Reiner et al., 2012a) was employed to visualize a variety of neurochemical features in mutant and control brains. To study the laminar organization of cerebral cortex, we used immunolabeling with a mouse monoclonal antibody (Sigma-Aldrich, C89848) to detect the lightly labeled calbindinergic neurons defining layers 2-3 (Van Brederode et al., 1991;Kondo et al., 1997;Fauser et al., 2013), a rabbit polyclonal antibody (Sigma/Aldrich, V2514) to detect VGLUT2+ fibers in layer 4 , the SMI-32 mouse monoclonal antibody (Covance, SMI-32R) and a rat monoclonal antibody against Ctip2 (Abcam, AB18465) to define neurons in layer 5 (Özdinler et al., 2011;Fauser et al., 2013), and a rabbit polyclonal antibody against FoxP2 (Abcam, AB16046) to define neurons in layer 6 (Özdinler et al., 2011). The specificity of these antibodies for their target antigens has been demonstrated by the manufacturer by Western blot and in published studies (Stillman et al., 2009;Hirano et al., 2011;Hashimoto et al., 2012;Huang et al., 2012;Lei et al., 2013). To detect neuropathology and/or neuron deficiency, immunolabeling was performed using a mouse monoclonal anti-GFAP or using a mouse monoclonal anti-NeuN, respectively. The specificity and efficacy of these antibodies has been shown previously (Mullen et al., 1992;Wolf et al., 1996;Bloch et al., 2011;Hu et al., 2011). 
Immunolabeling for Substance P (SP) and for the D1 dopamine receptor was used to study the SP+/D1+ direct pathway striatal neurons and their projections to the globus pallidus internus (GPi) and substantia nigra (SN), while immunolabeling for enkephalin (ENK) was used to study the ENK+ indirect pathway neuron striatal projection system to the globus pallidus externus (GPe). Immunolabeling for DARPP32 was used to evaluate striatal projection neurons and their projections as a group. The anti-SP was a rabbit polyclonal antibody (ImmunoStar, AB1566) whose specificity has been documented previously (Russo et al., 2013;Zahm et al., 2013). The anti-D1 antibody was a rat monoclonal directed against the 97-amino acid C-terminal fragment of human D1 (Sigma-Aldrich, D187), whose specificity has been demonstrated previously (Levey et al., 1993;Hersch et al., 1995). The anti-ENK used was a rabbit polyclonal antibody against leucine-enkephalin (ImmunoStar, 20066) whose specificity has also been shown previously (Reiner, 1987;Reiner et al., 2007;Tripathi et al., 2010). Immunolabeling for DARPP32 was carried out using an anti-DARPP32 (generously provided by P. Greengard and H. Hemmings), whose specificity has been previously documented (Ouimet et al., 1984). To characterize the regional specificity of cortical huntingtin deletion, we used immunolabeling for huntingtin using the D7F7 antibody from Cell Signaling (#5656). This antibody is selective for huntingtin and detects labeling for huntingtin in dendrites, terminals, and perikarya (Clemens et al., 2015), but the latter is often obscured by the density of neuropil labeling. We used D7F7 to assess the effect of the cortical huntingtin deletion immunohistochemically because anti-huntingtin antibodies that label perikarya tend to have substantial background labeling, which in this case would hinder interpretations. Finally, huntingtin is reportedly important for BDNF expression levels, which is highly expressed in cortical pyramidal neurons (Zuccato and Cattaneo, 2009;Reiner et al., 2012b). Accordingly, we used immunolabeling to characterize BDNF in cortical pyramidal neurons, using the N20 antiserum from Santa Cruz (Cat # sc546), whose specificity is demonstrated in Flores-Otero and Davis (2011). In situ hybridization Histochemistry (ISHH) Two control males and two Emx-htt KO males that had been studied behaviorally were subsequently used for in situ hybridization histochemistry at 18.2 months of age. Fresh-frozen coronal sections were processed for preproenkephalin (PPE, the enkephalin precursor), preprotachykinin (PPT, the substance P precursor), the D1 dopamine receptor, the D2 dopamine receptor, and brain-derived neurotrophic factor (BDNF) mRNA detection by ISSH, using previously described methods (Sun et al., 2002;Wang et al., 2006;Dragatsis et al., 2009;Reiner et al., 2012aReiner et al., , 2012b. ISHH was performed on 20 μm thick fresh frozen cryostat sections through the cortex and striatum anterior to the anterior commissure. The sections were collected onto pre-cleaned Superfrost ® /Plus microscope slides, dried on a slide warmer, and stored at −80 °C until used for ISHH. To process sections for ISHH, the slides were removed from −80 °C, quickly thawed and dried using a hair dryer. After fixation with 2% paraformaldehyde in saline sodium citrate (2× SSC) for 5 min, the sections were acetylated with 0.25% acetic anhydride/0.1 M triethanolamine hydrochloride (pH 8.0) for 10 min, dehydrated through a graded ethanol series, and air-dried. 
Digoxigenin-UTP labeled cRNA probes (i.e. riboprobes) were transcribed from plasmids with PPE, PPT, D1, D2 or BDNF cDNA inserts, generated by us using RT-PCR. Table 1 shows details on these probes and their target mRNA sequences. The BDNF riboprobe was directed against the protein-coding region of BDNF and part of the adjacent 3-prime untranslated sequence. Note that all BDNF transcripts share this sequence, found within exon IX of the BDNF gene, and thus our probe detected all BDNF transcripts (Aid et al., 2007). The sections were incubated with digoxigenin-labeled probe in hybridization buffer containing 50% formamide, 4× SSC, 1× Denhardt's solution, 200 μg/ml denatured salmon sperm DNA, 250 μg/ml yeast tRNA, 10% dextran sulfate, and 20 mM dithiothreitol at 63 °C overnight. After hybridization, the slices were washed at 58 °C consecutively in 4× SSC, 50% formamide with 4× SSC, 50% formamide with 2× SSC, and then 2× SSC, followed by treatment with RNase A (20 μg/ml) for 30 min at 37 °C. Finally, sections were washed at 55 °C in 1× SSC, 0.5× SSC, and 0.25× SSC, dehydrated through a graded ethanol series, and air-dried. Digoxigenin labeling was detected using anti-digoxigenin Fab fragments conjugated to alkaline phosphatase (AP), as visualized with nitroblue tetrazolium histochemistry (Roche, Indianapolis, IN). Sections were coverslipped with a 1% gelatin-based aqueous solution. The abundance of striatal neurons labeled for PPT, PPE, D1 or D2 was analyzed using ImageJ.
Motor behavior
Seventeen control mice (13 males, 4 females) and 11 Emx-htt KO mice (8 males, 3 females) were studied behaviorally using accelerating rotarod and open field analysis, in some cases at different points during the lifespan of individual mice. Rotarod analysis was carried out on Emx-htt KO mice and control mice, using a San Diego Instruments (San Diego, CA) rodent rotarod. For the rotarod task, RPM increased from 0 to 30 over a four-minute period, and 30 RPM was then maintained for another 2 min. The mice typically received a total of six rotarod sessions, separated by 3-5 min. Time to fall was the measure of rotarod performance. We also conducted automated 30-minute assessment of open field behavior at the same time points as for rotarod, using a Noldus EthoVision video tracking system to record and digitize the mouse movements (Noldus Information Technology, The Netherlands), and the SEE software of Drai and Golani (2001) to analyze the mouse motor behavior (Reiner et al., 2012a). The SEE software uses algorithms to dichotomize mouse movements into lingering episodes and progression segments (a simplified sketch of this segmentation is given below), allows rapid characterization of endpoints related to locomotion, and is robust in characterizing behavioral differences among mouse strains (Drai et al., 2000; Drai and Golani, 2001; Kafkafi et al., 2001, 2003; Lipkind et al., 2004; Reiner et al., 2012a). Each animal was brought from its housing room, introduced into the arena and returned after the end of the 30 min session. The arena was a 200 cm diameter circular area with a non-porous gray floor and a 50 cm high gray wall. The gray floor and wall provided a high-contrast background, enabling video tracking of the mice. The arena was illuminated with two ceiling-mounted 40 W neon bulbs.
Stereology
Stereological neuron counts were carried out blindly for nine Emx-htt KO mice (6 males, 3 females) and fifteen control mice (11 males, 4 females) that had been studied behaviorally.
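The sketch below is only a rough, threshold-based stand-in for the SEE segmentation mentioned in the open-field section above; SEE itself derives a per-session speed cutoff from the distribution of speeds (Drai and Golani, 2001), whereas here a fixed cutoff and hypothetical variable names are assumed purely for illustration.

```python
import numpy as np

def segment_open_field(x, y, t, speed_cutoff_cm_s=5.0):
    """Toy segmentation of a tracked open-field path into lingering episodes
    and progression segments. The fixed speed cutoff is an assumption made
    for illustration; SEE estimates it from each session's speed distribution."""
    x, y, t = map(np.asarray, (x, y, t))
    step = np.hypot(np.diff(x), np.diff(y))      # cm moved between frames
    speed = step / np.diff(t)                    # cm/s
    moving = speed > speed_cutoff_cm_s           # True = progression frame

    # group consecutive frames that share the same state into segments
    change_points = np.flatnonzero(np.diff(moving.astype(int))) + 1
    segments = np.split(np.arange(moving.size), change_points)

    progression_lengths = [step[s].sum() for s in segments if moving[s[0]]]
    n_stops = sum(1 for s in segments if not moving[s[0]])
    total_distance = float(step.sum())

    return {
        "total_distance_cm": total_distance,
        "n_progression_segments": len(progression_lengths),
        "mean_progression_length_cm": float(np.mean(progression_lengths)) if progression_lengths else 0.0,
        "stops_per_cm": n_stops / total_distance if total_distance > 0 else 0.0,
    }
```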
A one-in-twelve series of coronal sections immunolabeled for NeuN from the rostral end of striatum to the level of the anterior commissure was used for striatal and cortical neuron counts in each mouse. These same landmarks were used to define the limits of cerebral cortex and striatum for volumetric analysis. Unbiased stereological counts of striatal and cortical neurons were obtained using a Neurolucida Stereo Investigator system (Micro Brightfield, Colchester, VT Guley et al., 2016)). The dissector counting method was used, in which neurons were counted in counting frames assigned by the software throughout the areas pre-defined as striatum and cortex. Counts for the two sides of the brain were averaged for each case. In a subset of cases, stereological counts of cortical and striatal neurons were performed using the cresyl violet-stained sections. These studies showed that the two methods produced indistinguishable results. Image analysis of immunolabeling in striatal target areas To assess any possible pathology to the neurons of origin of the striatal projection systems, blinded computer-assisted image analysis was carried out on immunolabeled striatal terminals in each of the two main striatal projection targets in rodents, the GPe (n = 8 Emx-htt KO , 14 control), GPi (n = 4 Emx-htt KO , 5 control) and SN (n = 4 Emx-htt KO , 5 control). For these studies, the extent and immunolabeling intensity of the ENK fiber plexus in GPe, and the extent and immunolabeling intensity of the SP fiber plexi in GPi and substantia nigra were analyzed. Images of individual fields were captured using a 4× objective, and analyzed using ImageJ (v. 1.61), as in prior studies (Meade et al., 2000;Sun et al., 2002;Reiner et al., 2007Reiner et al., , 2012a. Fiber abundance for a given structure in a given case was expressed as the mean area occupied by labeled fibers for all measurements for that case. This provided measures of the abundance of labeled fibers per striatal target and of the intensity of peptide within them (reflecting peptide abundance). We have used this approach in prior studies on human HD brain or the brain in HD animal models (Figueredo-Cardenas et al., 1994;Deng et al., 2004;Reiner et al., 2012a). Statistics Data from the behavioral, immunohistochemical and stereological studies were analyzed statistically by unpaired two-tailed t-tests comparing control and Emx-htt KO mice, with the exception that the relative performance of control and Emx-htt KO mice on rotarod across trials was analyzed using one-way ANOVA. Regression analysis was used to evaluate any changes in any of the parameters over the age-range examined in the control and Emx-htt KO mice. Regression analysis and t-tests were performed using Excel, and ANOVA was performed using SPSS. Analysis of covariance was used to determine if the slope of the regression line for age-related effects in the control mice differed from that for Emx-htt KO mice for any of the morphological or behavioral parameters, using online tools (http://www.danielsoper.com/). Generation of Emx-htt KO mice A mutant mouse line (Mouse Genome Informatics designation, Htt tm2Szi ) was previously created in which lox sequences were inserted that flank the 1.3kB region upstream of the Hdh transcription initiation site and intron1 (Fig. 1A, B) . 
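As an aside on the Statistics section above: the age-slope comparison attributed there to online ANCOVA tools can equivalently be run as a linear model with a genotype-by-age interaction term. The following minimal sketch shows that approach alongside the unpaired t-test; the data file and column names are hypothetical, not the authors' actual files.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical per-mouse table:
#   'group'  -> "control" or "KO"
#   'age_mo' -> age in months at sacrifice
#   'value'  -> the morphological or behavioral measure (e.g., cortical volume)
df = pd.read_csv("measurements.csv")

# Unpaired two-tailed t-test of overall group means
ctrl = df.loc[df.group == "control", "value"]
ko = df.loc[df.group == "KO", "value"]
print(stats.ttest_ind(ctrl, ko))

# ANCOVA-style slope comparison: the age_mo:C(group) interaction term tests
# whether the age-related regression slope differs between genotypes
model = smf.ols("value ~ age_mo * C(group)", data=df).fit()
print(model.summary())
```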
We used this mouse line in combination with a previously described Emx1 IREScre/+ mouse line (Mouse Genome Informatics designation, Emx1 tm1(cre)Krj ) to direct Hdh deletion from cortical pyramidal neurons (Gorski et al., 2002). To this end, we bred Hdh +/− mice (Mouse Genome Informatics designation, Htt tm1Szi ) with Emx1 IREScre/+ mice, and the resulting Emx1 IREScre/+ ; Hdh +/− offspring were then crossed with Hdh flox/flox mice (Mouse Genome Informatics designation, Htt tm2Szi ) for the generation of Em-x1 IREScre/+ ; Hdh flox/− mice (referred to here as Emx-htt KO mice for simplicity). As expression of emx1 begins in the developing cortex at E8 (Simeone et al., 1992;Yoshida et al., 1997) and is robust by E10.5 (Gorski et al., 2012), cre driven by an emx1 promoter deletes Hdh from cortical pyramidal neurons early in cortical development. We confirmed that cre expression is specific for pallium in Emx1 IREScre/+ mice by crossing them to R26R reporter mice (Gtrosa26 tm1Sor ), which express β-galactosidase upon cre-mediated recombination. Staining of forebrain sections from Emx1 IREScre/+ ; R26R/+ offspring by X-gal histochemistry showed that cre driven LacZ expression in these mice is restricted to the cerebral cortex (Figs. 1C). Specificity of huntingtin deletion In the Emx-htt KO mice, Western blot analysis for both 5 month old and 12 month old mice showed a significant 85.3% reduction of huntingtin protein in cerebral cortex of 5 control and 5 Emx-htt KO mice (p = 0.00000005), confirming the efficacy of deletion from cortical pyramidal neurons ( Fig. 2A, C). Residual cortical huntingtin is likely to be localized to interneurons, due to their origins from a subpallial lineage not expressing emx1, and/or to cells of the vasculature. Immunolabeling for huntingtin with the D7F7 antibody from Cell Signaling confirmed the greatly diminished expression of huntingtin in neuronal perikarya and the neuropil of cerebral cortex of Emx-htt KO mice (Fig. 3A, B), with the only residual labeling being in presumptive cortical interneurons and in the terminals of the thalamic projection to layer 4. As also observed by McKinstry et al. (2014), Western blotting revealed a substantial and significant reduction (82.3%, p = 0.00000025) in huntingtin in striatum of the Emx-htt KO mice as well (Fig. 2B, C). Immunolabeling with D7F7 confirmed this finding and showed a prominent reduction of huntingtin in the striatal neuropil of Emx-htt KO mice, with perikaryal labeling remaining prevalent (Fig. 3C, D). Most conspicuous among the huntingtin-immunolabeled striatal neurons were perikarya the size and shape of cholinergic and parval-buminergic interneurons, which have been previously reported to express high levels of huntingtin (Fusco et al., 1999). The bulk of the prominent neuropil labeling in striatum thus appears to represent labeling of cortical terminals for huntingtin, as also suggested by McKinstry et al. (2014). Consistent with the expression of huntingtin by striatal projection neurons (Fusco et al., 1999) and its anterograde axonal transport (Block-Galarza et al., 1997;McKinstry et al., 2014), the terminals in the three major striatal target areas -GPe, GPi, and sub-stantia nigra -were rich in huntingtin in control mice (Fig. 4). The abundance of huntingtin in striatal terminals in GPe, GPi and the nigra in Emx-htt KO mice appeared to be as ample as in age-matched control mice (Fig. 
4), suggesting that huntingtin production by striatal neurons themselves was not diminished by the genetic deletion of Hdh from cortical pyramidal neurons. These findings further indicate that the diminished huntingtin in striatum of the Emx-htt KO mice is likely to reflect loss of huntingtin from corticostriatal terminals rather than from striatal neurons. The deletion of Hdh from cortical neurons of the emx1 pyramidal neuron lineage was associated with a significant 37.9% reduction in BDNF levels in cerebral cortex (n = 5 control, 5 Emx-htt KO , p = 0.000031) (Fig. 5A, D), caused by reduced expression of BDNF message by cortical pyramidal neurons (Fig. 5B, C), and as a result a reduction in BDNF protein in cortical pyramidal neurons (Fig. 6A, B). Levels of BDNF in layer 5 cortical neurons, where corticostriatal neurons reside (Reiner et al., 2010), appeared to be reduced by about 40-50% in both immunolabeled sections and ISHH sections. Western analyses were performed to evaluate the impact of the cortical deletion of huntingtin from pyramidal neurons on Akt and ERK signaling pathways, both of which are driven by BDNF. We found that both phospho-Akt and phospho-ERK were significantly reduced in cerebral cortex of Emx-htt KO mice compared to control mice (n = 5 control, 5 Emx-htt KO ) (Fig. 5A, D), indicating reduced signaling by both BDNF-driven pathways. Phospho-Akt (pAkt) was reduced by 44.6% in Emx-htt KO cortex (p = 0.000040), and phospho-ERK (pERK) was reduced by 70.8% in Emx-htt KO cortex (p = 0.00000013). ERK itself was unchanged (Fig. 5A, D). Forebrain morphology and cortical organization General forebrain morphology and cortical organization appeared normal in adult mice with early embryonic deletion of huntingtin from cortical pyramidal neurons. For example, NeuN immunostained sections cut transverse to the long axis of the brain show that at 10.5 months of age, the overall structure of the forebrain of Emx-htt KO mice was indistinguishable from that in control mice at the same age (Fig. 7). Moreover, cortical lamination was normal in terms of the relative location of layer-specific cell populations such as the lightly labeled calbindinergic projection neurons in layers 2-3, VGLUT2+ fibers in layer 4, SMI-32+ and Ctip2+ neurons in layer 5, and FoxP2+ neurons in layer 6 ( Fig. 8), as well as in terms of the thickness of these layers. To determine if the cortical huntingtin deletion had an adverse effect on cortical and/or striatal neuron abundance (either due to deficient neurogenesis or diminished survival), we performed blinded stereological assessment of cortical volume and neuron abundance (n = 15 control, 9 Emx-htt KO ). We found that cerebral cortex volume in Emx-htt KO mice was slightly but significantly smaller than in control mice (8.7% reduction) overall across the 3.2 to 23.6 month lifespan examined (p = 0.0011). The volume difference did not progress with age, and the regression lines for volume as a function of age remained largely parallel for control and Emx-htt KO mice throughout the ages examined (Fig. 9A). Similarly, cortical neuron abundance in mice with early embryonic deletion of huntingtin from cortical pyramidal neurons was significantly less than in control mice (22.3% reduced) overall across the 3.2 to 23.6 month lifespan examined (p = 0.0006) (Fig. 9C). This reduction was evident in both counts of NeuN-immunolabeled and cresyl violet-stained tissue. 
As in the case of cortical volume, the cortical neuron abundance difference did not progress significantly with age, and the regression lines for volume as a function of age remained largely parallel for control and Emx-htt KO mice throughout the ages examined. Consistent with the absence of progressive cortical degeneration in Emx-htt KO mice, no astrocytic reaction was evident in the Emx-htt KO mice at any age (Fig. 8K, L). Striatal morphology and neurochemistry The deletion of huntingtin from cortical pyramidal neurons during embryogenesis had an effect on striatal volume and neuron abundance. Striatal volume was significantly reduced by 14.0% (p = 0.0048) and neuronal abundance was significantly reduced by 20.5% (p = 0.0004). As in the case of cortex, age-related decline in striatal volume and neuron abundance was not statistically evident, and age-related patterns were not different than in the control mice in any case (Fig. 9B, D). Despite the reduced neuron abundance, striatal neurochemistry was normal in Emx-htt KO mice, based on ISHH labeling for PPE, PPT, D1 and D2 perikarya in striatum, immunolabeling for D1 and DARPP32 in striatum, and immunolabeling for DARPP32, ENK, D1, and SP terminals in striatal target areas. The spatial abundance per square millimeter and labeling intensity of PPE perikarya per section in 18.2-month old Emx-htt KO mice (n = 2) was indistinguishable from that in age-matched control mice (n = 2) (Fig. 10A, E), and the abundance of ENK+ terminals in GPe in Emx-htt KO mice (54.9% occupancy of GPe) was not significantly different than that in control mice (53.0% occupancy of GPe) for a sample of 14 control and 8 cortical KO mice ranging in age from 8.4 to 23.6 months (p = 0.5061) (Fig. 10I, L). No significant age related changes were seen in ENK+ striato-GPe terminal abundance in control or Emx-htt KO mice. Similarly, the abundance and density of PPT perikarya per section in 18.2-month old Emx-htt KO mice (n = 2) was comparable to that in age-matched control mice (n = 2) (Fig. 10C, G), and the abundances of SP+ terminals in the GPi (38.4% in control GPi versus 41.5% in Emx-htt KO GPi) and the substantia nigra (40.0% in control SN versus 38.7% in Emx-htt KO SN) of Emx-htt KO mice were not significantly different from that in control mice, for a sample of 4 Emx-htt KO mice and 5 control mice (GPi, p = 0.3708; SN, p = 0.7336). No significant age-related changes were seen in SP+ striato-GPi or striato-SN terminal abundance in control or Emx-htt KO mice. Similar quantitative results were obtained for D2 (Fig. 10B, F) and D1 ISHH (Fig. 10D, H) as for PPE and PPT ISHH, respectively. Note that because of the volumetric reduction, striatal neuron density per square millimeter was only 8.3% less than control in the Emx-htt KO mice. This may explain why the spatial density of PPT, PPE, D1 and D2 striatal neurons appeared indistinguishable from normal in Emx-htt KO mice, despite the overall reduction in striatal neuron abundance. Behavioral data The Emx-htt KO mice were indistinguishable from control mice in weight throughout their lifespan, and both groups showed a significant but normal trend toward age-related weight gain (Fig. 11A). By contrast, Emx-htt KO mice showed a mild and non-progressive mean rotarod deficit throughout their lifespan (Fig. 11B), for 23 WT and 14 Emx-htt KO measured time points. 
Additionally, the Emx-htt KO mice were persistently and significantly poorer on rotarod than control mice over the repeated trials on the day of testing, as assessed by one-way ANOVA (p = 0.00033). Emx-htt KO mice were also abnormal in their open field behavior. In particular, they were hyperactive compared to control mice, traversing a 19.2% greater total distance than age-matched control mice (p = 0.0299) (Fig. 12A). Underlying this increase in distance were a significant 17.9% increase in speed (p = 0.0226), a significant 74.3% increase in the unit of locomotion referred to as progression segment length (p = 0.0087), a significant 34.4% increase in endurance (the ratio of distance traveled in the second 15 min compared to the first 15 min of the open field session) (p = 0.00002), and a significant 35.2% decrease in the number of stops per unit distance (p = 0.0018) (Fig. 12B-E). The increase in the length of progression segments was associated with a significant 22.1% decrease in the frequency of their occurrence (p = 0.0262) (Fig. 12F), which was not enough to offset the other factors driving the activity increase. Discussion Our results in Emx-htt KO mice show that early embryonic deletion of huntingtin from the developing pallium yields reduced numbers of cortical neurons and reduced cortical volume in adults but no evident abnormalities in cortical lamination. Striatal neurons were also reduced in their abundance, as was striatal volume, but striatal neurons were normal in their neurochemistry. Expression of huntingtin by cortical pyramidal neurons thus appears critical for development of both normal cortical neuron and striatal neuron abundance. For both cortex and striatum, it seems likely that the neuron reduction contributes importantly to the volume reduction we observed in Emx-htt KO mice. Neither cortical nor striatal neurons, however, showed accelerated age-related loss in Emx-htt KO mice compared to control mice out to nearly 2 years of age. The Emx-htt KO mice were hyperactive in open field and slightly defective on rotarod at all ages, but normal in body weight. These findings are discussed in more detail below. Effect on cortical and striatal development Among its functions, huntingtin interacts with microtubules, the dynein/dynactin complex, and kinesin to regulate the microtubule-dependent transport of proteins and organelles in neurons (Caviston et al., 2007;Colin et al., 2008;Gauthier et al., 2004;McGuire et al., 2006;Saudou and Humbert, 2016). Huntingtin appears to also play a role in cell division by means of its presence in the spindle microtubules of the centrosome of dividing cells, and huntingtin knockdown has been found to reduce cell division in vitro (Godin et al., 2010). Genetic manipulations in mice have confirmed that huntingtin is critical for embryogenesis, with homozygous deletion of the mouse homologue of the human HD gene (i.e. Hdh) resulting in apoptosis in the embryonic ectoderm shortly after the onset of gastrulation, yielding embryos that are developmentally retarded and disorganized prior to death between embryonic days 8.5 and 10.5 (Duyao et al., 1995;Nasir et al., 1995;Zeitlin et al., 1995;Dragatsis et al., 1998;Dragatsis and Zeitlin, 2001). An important role of huntingtin in vesicular trafficking of nutrients across extraembryonic membranes appears to be a critical factor in the deleterious effect of huntingtin deletion in embryos (Dragatsis et al., 1998). 
While huntingtin levels that are 50% of normal (as in hemizygous Hdh deletion) do not obviously hinder development (Ambrose et al., 1994;Duyao et al., 1995;Nasir et al., 1995;Zeitlin et al., 1995;Persichetti et al., 1996), reduction of huntingtin to levels between 25% and 50% of normal during early development results in defective neurogenesis and profound malformations of cerebral cortex and striatum (White et al., 1997). In the present study, we evaluated the impact of selective early embryonic deletion of huntingtin from cortical pyramidal neurons, using the cre-loxP system to inactivate the mouse huntingtin gene (Hdh) in emx1-expressing cell lineages beginning early in cortical development. Based on the prior findings implicating huntingtin in cell division and neuronal migration (Godin et al., 2010;Tong et al., 2011), we expected reduced cortical neuron numbers and laminar disorganization. We found about a 20% reduction in cortical neuron abundance and about a 10% reduction in cortical volume, but like McKinstry et al. (2014) we saw no evident alteration in laminar organization, in our case using layer-specific markers. It seems likely that the cortical neuron reduction was predominantly in pyramidal neurons, since cortical interneurons, which arise from the medial ganglionic eminence of the subpallium and migrate to the pallium (Parnavelas, 2000), would not have undergone huntingtin deletion, as consistent with our huntingtin immunolabeling results. McKinstry et al. (2014) did not note reduced neuron numbers in cortex at 3-5 weeks of age in Emx-htt KO mice, but they assessed neuron abundance by areal density rather than by the 3-dimensional stereological methods we used. Although it is possible that the loss occurs after 5 weeks, our stereological counts indicate a shortfall in cortical neurons already at 3 months of age, which is not evident from neuronal areal packing density assessments alone. Our results are thus consistent with the prediction that cortical pyramidal neuron huntingtin deletion should affect cortical neuron abundance, due to the presumptively impaired neurogenesis caused by the huntingtin deletion from developing pallial neurons. In a prior study involving injection of Hdh −/− embryonic stem cells into mouse blastocysts, we found that Hdh −/− cortical neuroblasts may undergo premature death (Reiner et al., 2001(Reiner et al., , 2003, suggesting that such death could also contribute to the reduced abundance of cortical neurons in Emx-htt KO mice. With regard to the apparent normalcy of cortical lamination, it may be that the previously reported defect in the migration of early born cells (Tong et al., 2011) is transient and self-correcting, or that any cells that migrate incorrectly either die or adopt the phenotype of the layer to which they incorrectly migrate. As cortical neurons appear specified shortly after they are born (Greig et al., 2013;Woodworth et al., 2016) and neuronal re-specification does not seem to occur in other disorders involving cortical laminar disorganization (D'Arcangelo et al., 1995;Ikeda and Terashima, 1997;Rice et al., 1998), it seems more likely that any migration abnormality of early born cells is either transient or resolved through death of ectopic neurons. More recently, Barnat et al. (2017) reported that with huntingtin deletion ~10% of cortical cells born after E15.5 fail to correctly migrate to layers 2-3 of cortex and are mis-localized to the deeper layers in adult mouse cerebral cortex. 
If such a migration defect occurred during cortical development in our Emx-htt KO mice, it was not substantial enough to yield any evident thinning or mis-alignment of layers 2/3. In the present study, we also observed a 20% reduction in striatal neuron abundance in Emx-htt KO mice. The reduced expression of BDNF by cortical neurons, likely to stem at least in part from the loss of the positive effect of huntingtin on BDNF synthesis (Zuccato et al., 2001), may have been a major factor in the reduced striatal neuron abundance. BDNF produced by cortical neurons is transported axonally to the striatum and released (Altar et al., 1997;Zuccato et al., 2001;Gauthier et al., 2004;Zuccato and Cattaneo, 2007), promoting survival of neurons in striatum (Lessmann et al., 2003;Poo, 2001;Zuccato and Cattaneo, 2007). Consistent with a trophic effect of corticostriatal BDNF, cortex-specific embryonic BDNF knock-out or embryonic deletion of the BDNF receptor (TrkB) from striatal neurons results in striatal neuron deficiency (Gorski et al., 2003;Baquet et al., 2004;Strand et al., 2007;Baydyuk et al., 2011;Baydyuk and Xu, 2014). Thus, the reduced corticostriatal BDNF in Emx-htt KO mice we observed may have caused diminished striatal neuron survival (Davies, 1996). As BDNF also exerts trophic effects on cortical neurons, its reduction in cortical pyramidal neurons in the mice with cortical deletion of Htt may also have contributed to the reduced neuron numbers in cerebral cortex. Note that since the glutamine repeat expansion in huntingtin causing HD can adversely affect its function (Barnat et al., 2017), our results with embryonic deletion of huntingtin from cortical neurons suggest the mutation in huntingtin could have an adverse developmental impact on neuron abundance and the connectivity of cortex and striatum, which may help explain the typically smaller brain size in HD gene carriers even at young ages much prior to any disease symptoms (Nopoulos et al., 2011;Lee et al., 2012), and could ultimately contribute to HD pathogenesis as well. Effect on long-term cortical and striatal neuron survival Huntingtin appears to exert a neuroprotective effect (Rigamonti et al., 2000;Kalchman et al., 1997;Hackam et al., 2000;Gervais et al., 2002), in part by promoting neuronal survival via a stimulatory effect on production of BDNF (Zuccato et al., 2001). Huntingtin may have this effect on BDNF production by means of an interaction with the transcriptional regulator Sp1, since the BDNF gene is known to possess an Sp1 response element in its proximal promoter region, and Sp1-dependent transcription is diminished by mutant huntingtin (Luthi-Carter et al., 2002). Corticostriatal BDNF exerts an important receptor-mediated, pro-survival effect in neurodegenerative diseases such as Huntington's disease (Altar et al., 1997;Ivkovic and Ehrlich, 1999;Schuman, 1999;Reiner et al., 2012b). For this reason, it might be expected that deletion of huntingtin from cortical pyramidal neurons would have a long-term adverse effect on age-related cortical and striatal neuron survival. In the present study, however, we found that cortical and striatal neurons showed age-dependent survival in Emx-htt KO mice that was no different than in control mice, despite the reduced cortical BDNF production and BDNF signaling. Perhaps after the early loss of a sensitive neuronal population, additional cell losses did not occur in Emx-htt KO mice due to a unique sensitivity of these early-lost neurons or the activation of compensatory mechanisms. 
Our present results are also surprising in light of prior studies using conditional knockout of Hdh in mouse forebrain during late embryonic or perinatal development (Table 2). These studies used conditional deletion of Hdh in two separate lines of mice with cre expression driven by the promoter for the alpha subunit of calcium-dependent calmodulin kinase-2 (Camk2a), either starting after E15 in one line or after postnatal day 5 in the other. Since the forebrain in rodents is still developing during the first postnatal week (which roughly corresponds to the third trimester in humans), the Hdh deletion in both lines of mice occurs during the late stages of forebrain development. Hdh deletion beginning around E15 resulted in an earlier onset of behavioral abnormality (clasping) and forebrain degeneration, while the later deletion of Hdh yielded a milder phenotype. Reactive astrocytosis and axonal degeneration were observed at 3-4 months of age in forebrain in E15 deletion mice, and some of these showed degeneration in striatum, caudolateral cerebrum and amygdala at 8 months. As the neuronal death in forebrain occurs months after the elimination of forebrain Hdh expression at E15, it might be expected that cortical pyramidal neuron elimination of Hdh at E10.5 as in our study would also adversely affect long-term survival of at least cortical neurons. That we did not find this to be the case suggests that E15 forebrain Hdh deletion is not equivalent to the E10.5 pallium-specific Hdh deletion in our Emx-htt KO mice. It is worth noting that the Emx1-cre driven deletion occurs in both neurons and glia (Gorski et al., 2002), whereas Camk2a transgene expression is usually restricted to neurons; perhaps an imbalance in the effects of Hdh deletion on neurons and glia leads to a different phenotype. It would be useful to know the particular fate of cortical pyramidal neurons with E15 forebrain Hdh deletion. It may be that they, like cortical pyramidal neurons in our Emx-htt KO mice, do not show progressive degeneration. Alternatively, a compensation for huntingtin deletion (e.g. elevated expression of an alternative trophic factor) may occur with E10.5 deletion that does not occur with E15 deletion. In this regard, comparison of our results with studies of the effects of adult deletion of huntingtin using inducible Cre/Lox systems is also of interest (Table 2). Wang et al. (2016) recently reported that huntingtin deletion from neurons at 2, 4 or 8 months of age, using Hdh flox/flox mice expressing a nestin promoter-driven CreER treated with tamoxifen, yielded no change in body weight and no motor abnormalities up to 6-7 months after depletion and no evident brain pathology or volume reduction up to 3 months after depletion. Careful morphometric studies such as those performed here were, however, not conducted, and so the possibility of neuron loss in cortex or striatum cannot be excluded. By contrast, Pla et al. (2013) found that depletion of huntingtin from hippocampal neurons at 2 months of age, using a similar approach but with a CAMK2a promoter-driven CreER, caused a deficit in the maturation and survival of adult-generated hippocampal neurons, due to a decline in BDNF-mediated trophic support of the new neurons.
Whether overall reductions in cortical and hippocampal neuron abundance occurred, or only in newborn hippocampal neurons, was not addressed. In a more extensive study (Dietrich et al., 2017), adult Hdh flox/− mice carrying the tamoxifen-inducible CAAGG-CreER™ allele were injected with tamoxifen to induce global cre-mediated recombination and huntingtin elimination at 3, 6 or 9 months of age. A progressive rotarod impairment was evident by one month after tamoxifen treatment, and severe gait abnormalities, hind limb clasping on tail suspension, and resting tremors were seen by 16 months of age in all mice with deletion. Neuropathological changes included progressive brain weight loss and bilateral thalamic lesions. Neither overt cortical nor overt striatal neuron loss was observed up to over a year after huntingtin deletion. These studies collectively show that adult huntingtin deletion can have adverse effects on brain morphology and function, but they do not resolve if adult deletion of huntingtin from cortical pyramidal neurons adversely affects their survival over a mouse lifetime, or if it is as well tolerated as after early embryonic deletion as in Emx-htt KO mice. Behavioral abnormalities in Emx-htt KO mice We observed rotarod impairment, open field hyperactivity, but normal weight maintenance in Emx-htt KO mice. Of interest, mice with early embryonic deletion of the torsin gene (Dyt1) from the cortical pyramidal neuron lineage similarly show motor impairments and hyperactivity (Yokoi et al., 2008). Mutation of the Dyt1 gene is associated with early onset generalized dystonia, and the authors attributed their phenotype to some unknown cortical abnormality. It seems likely that abnormalities in cortical and corticostriatal development with early embryonic huntingtin deletion from cortical pyramidal neurons are the basis of the motor deficit and hyperactivity we observed. Along these lines, McKinstry et al. (2014) reported that in Emx-htt KO mice, created with the same approach as we used, cortical and corticostriatal excitatory synapses form and mature at an accelerated pace through postnatal day 21 (P21). This exuberant synaptic connectivity within cortex was lost by 5 weeks, but retained yet at 5 weeks in the case of corticostriatal synapses. It may be that this exuberant corticostriatal connectivity, if persistent, could account for the hyperactivity we saw in our mice. Given the current understanding of basal ganglia functional organization, preferentially increased input to the striatal go-neurons of the direct pathway would be one way in which exuberant corticostriatal connectivity in cortical Htt-KO mice could yield hyper-activity (Deng et al., 2014). It is also possible that the massive loss of huntingtin from corticostriatal terminals in the Emx-htt KO mice accounts for the phenotype. Huntingtin has been reported to associate with synaptic vesicles and facilitate neurotransmitter release in pre-synaptic excitatory synaptic terminals (DiFiglia et al., 1995;Rozas et al., 2011). If the huntingtin depletion preferentially diminished neurotransmitter release from cortical terminals ending on indirect pathway neurons, for example those of the pyramidal tract type corticostriatal neurons (Deng et al., 2015), hyperactivity would be the predicted outcome (Albin et al., 1989). 
Implications for HD therapy One possible avenue for HD treatment involves gene therapy to prevent production of the mutant protein, typically using either anti-sense DNA oligonucleotides (ASOs) against mutant huntingtin mRNA, or RNA that interferes with huntingtin expression (RNAi) (Crook and Housman, 2013;Ramaswamy and Kordower, 2012). Several studies have shown that direct delivery of RNAi molecules to the forebrain via an AAV vehicle can reduce mutant Htt production and slow phenotypic progression in mouse HD models (DiFiglia et al., 2007;Wang et al., 2005). Nuclease resistant ASOs have been developed that target human Htt message, and shown to be effective following intraventricular infusion in reducing mutant human Htt and improving motor function in mice (Kordasiewicz et al., 2012;Southwell et al., 2014;Stanek et al., 2013). Given the above evidence of an important role of wild-type (WT) huntingtin in neuronal survival, and the evidence that mutant huntingtin is more deleterious on a background of reduced WT huntingtin expression (Auerbach et al., 2001), safety concerns remain for gene therapy approaches that seek to reduce mutant huntingtin at the expense of also reducing WT huntingtin. Several studies have, however, shown that knocking down both mutant and WT huntingtin by 60-75% can achieve the benefit of mutant knockdown without an evident harm of WT allele co-knockdown (Boudreau et al., 2009), and others have shown that knockdown of WT huntingtin alone in striatum is well tolerated in mice (Grondin et al., 2012;McBride et al., 2008). Our present results show that complete deletion of huntingtin from cortical pyramidal neurons and glia beginning early in cortical development is without evident harmful consequences for the long-term postnatal survival of cortical and striatal neurons. Although some early compensation for the loss of huntingtin may be critical to this effect, our findings nonetheless do not support the view that the complete elimination of WT huntingtin from pyramidal neurons is invariably harmful for cortex and striatum. Whether elimination of WT huntingtin from pyramidal neurons is harmful when done in adults, or whether striatal neuron huntingtin depletion at any point in the lifespan is harmful for striatum, needs fuller examination. Moreover, even if neither cortical nor striatal huntingtin depletion are individually harmful, combined cortical and striatal huntingtin depletion might compromise striatum by the double hit of huntingtin insufficiency and corticostriatal BDNF reduction (Reiner et al., 2012b). assistance with this work. This research was supported by NS-057722 (AR), NS-028721 (AR), NS-098137 (AR), and The Methodist Hospitals Endowed Professorship in Neuroscience (AR), EY-14998 (KRJ). Fig. 1. A mutant mouse line (Htt tm2Szi ) was created in which loxP sequences were inserted that flanked the 1.3 kB region upstream of the Hdh transcription initiation site and intron 1 . Images A and B show a schematic of the wild-type Hdh allele and the floxed Hdh allele before recombination (A), and after recombination (B), the latter generating an Hdh allele lacking the promoter and exon 1. To evaluate the specificity of emx1-cre driven recombination in mouse forebrain that would be achieved by crossing Emx1 IREScre/+ mice with Htt tm2Szi mice, we crossed Emx1 IREScre/+ mice with R26R reporter mice (Gtrosa26 tm1Sor ) possessing a floxed lacZ-gene that expresses β-galactosidase upon recombination by cre. 
Image C shows the results for a section through the telencephalon from an Emx1 IREScre/+ ; R26R offspring that had been X-gal stained. The results confirm that cre expression in Emx1 IREScre/+ mice is specific for cerebral cortex. LacZ labeling in striatum is in corticofugal fibers. Western blot analyses of total protein lysates from cortex and striatum of control and Emx-htt KO mice. Protein lysates were separated in 8% SDS-PAGE and transferred to nitrocellulose membrane. The upper panel in image A shows detection of huntingtin with a mouse monoclonal anti-huntingtin antibody 2166, and the lower panel shows antiβ-tubulin staining as a loading control, from 5-month-old control and Emx-htt KO mice. Note that huntingtin protein is substantially reduced in cortex of Emx-htt KO mice, confirming knockout efficacy. Huntingtin protein was also substantially reduced in striatum, as shown in image B comparing 5-month-old control and Emx-htt KO mice. As explained in the text, this is likely to largely represent depletion of huntingtin from corticostriatal terminals, which appears to be the predominant source of huntingtin in striatum. The graph in image C shows densitometric analysis of Western blot results for huntingtin in cerebral cortex and striatum in 5 control and 5 Emx-htt KO mice at 5-or 12-months of age, normalized to tubulin, confirming that the reduction of huntingtin in Emx-htt KO cortex and striatum is highly significant (asterisks). Fig. 3. Images A-D show immunolabeling in control and Emx-htt KO cortex at low power and in control and Emx-htt KO striatum (Str) at slightly higher power using the D7F7 antibody against huntingtin, which labels huntingtin in axons, terminals, neuropil and the cytoplasm of perikarya. Note that huntingtin is reduced in the cortex in Emx-htt KO mice except in the terminals of thalamic input in layer 4 and in presumptive interneurons. In striatum, huntingtin is reduced in the neuropil in Emx-htt KO mice, but not in striatal perikarya themselves. Note that with the loss of corticostriatal huntingtin from the striatal neuropil, the huntingtin-immunolabeled striatal perikarya are more salient. The mice shown for immunolabeling were 10.8 months old. Scale bar in B applies to A as well, and the scale bar in D applies to C as well. Immunolabeling with D7F7 showing huntingtin in terminals in the striatal target areas GPe (globus pallidus externus), GPi (globus pallidus internus) and SN (substantia nigra) in control and Emx-htt KO mice. Note that anti-huntingtin labeling in striatofugal terminals in GPe, GPi and SN in Emx-htt KO mice is indistinguishable from that in control mice, suggesting that striatal projection neuron huntingtin production and axonal transport was unaffected by the cortical huntingtin deletion. Images showing cortical BDNF reduction in Emx-htt KO mice by Western Blot (A) and in situ hybridization histochemistry (ISHH) (B, C). Image A shows Western blot analyses of total protein lysates from cortex of 5-month-old control and Emx-htt KO mice. Protein lysates were separated in 12% SDS-PAGE and transferred to nitrocellulose membrane, which were then sequentially probed with anti-BDNF, anti-phospho-Akt, (pAkt) anti phospho-ERK1/2 (pERK), anti-ERK1/2 and anti-β-tubulin. Note that BDNF, phospho-Akt and phospho-ERK are substantially reduced in cortex of Emx-htt KO mice. Images B and C show BDNF ISHH labeling of cerebral cortex from an 18-month old control and Emx-htt KO mice. 
Consistent with the Western blot data, BDNF expression is reduced in cortical pyramidal neurons of layers 2/3 and 5 in the Emx-htt KO mice. The graph in image D shows densitometric analysis of Western blot results for cerebral cortex in 5 control and 5 Emx-htt KO mice (at 5 and 12 months of age), confirming that the reduction in BDNF, pAkt, and pERK (asterisks) in Emx-htt KO cortex is highly significant. Images showing cortical BDNF reduction in Emx-htt KO Images A and B show low power views of NeuN immunostained transverse sections (medial to left) revealing that the overall structure of the forebrain in Emx-htt KO mice (B) at 10.5 months of age is indistinguishable from control mice (A) at the same age. Images C and D present higher power views of motor cortex (medial to left) in NeuN immunostained transverse sections, showing that the structure of the cerebral cortex (C, D) in Emx-htt KO mice at 10.5 months of age (D) is indistinguishable from control mice (C) at the same age. Images of immunolabeled transverse sections showing that cortical lamination was normal in Emx-htt KO mice in terms of the relative location of layer-specific cell populations such as calbindinergic neurons in layers 2-3 (A, B), VGLUT2+ fibers in layer 4 (C, D), SMI-32+ (E, F) and CTIP2+ (G, H) neurons in layer 5, and FoxP2+ neurons (I, J) in layer 6, which confirmed the absence of obvious laminar abnormalities. GFAP immunolabeling (K, L) did not reveal any differences between Emx-htt KO and control mice. Cortical volume was persistently less in Emx-htt KO mice (n = 9) than in control mice (n = 15) throughout the life span of the mice, by about 9% overall (A). The trend was, however, not progressive, as the decline in cortical volume in Emx-htt KO mice by 24 months of age was no different than in control mice, and in neither case was volume significantly correlated with age. Similarly, striatal volume was also persistently less in Emx-htt KO than in control mice throughout the life span of the mice, by about 14% overall (B). The trend again was, however, not progressive, as the change in striatal volume in Emx-htt KO mice by 24 months of age was no different than in control mice, and in neither case was volume significantly correlated with age. Cortical and striatal neuron abundance (C, D) in Emx-htt KO mice were also less over the lifespan of the mice (cortex 22.3% less, striatum 20.5% less), but the difference did not become significantly enhanced with age, nor was neuron abundance in either control or Emx-htt KO mice significantly correlated with age. Analysis of covariance confirmed that the slope of the regression lines did not differ significantly between control and Emx-htt KO mice for any of these four parameters. Striatal neurochemistry is normal in Emx-htt KO mice based on in situ hybridization histochemistry (ISHH) for preproenkephalin (PPE), preprotachykinin (PPT), D1 receptors and D2 receptors in striatum, and immunolabeling of enkephalinergic striatal terminals in GPe and of substance P-containing terminals in GPi and SN. As reflected by the ISHH images shown, Emx-htt KO mice did not show obvious neurochemical abnormalities for striatal PPE (A, E) or D2 (B, F) in indirect pathway neurons or for striatal PPT (C, G) or D1 (D, H) in direct pathway neurons compared to control mice. All mice used for these images were 18.2 months old at the time of sacrifice. 
Similarly, immunolabeling for enkephalinergic (ENK) striatal terminals in GPe (I, L) and for substance P-containing striatal terminals in GPi (J, M) and SN (K, N) did not differ between control and Emx-htt KO mice. Scale bar in image H is applicable to images A-H, and scale bars in images L, M and N are applicable to images I, J and K, respectively. The Emx-htt KO mice do not show weight loss compared to control mice over their lifespan (A), but do show a mild rotarod defect over their lifespan (B). Seventeen control mice (13 males, 4 females) and 11 Emx-htt KO mice (8 males, 3 females) were studied for weight, in some cases at different points during the lifespan of individual mice, yielding 19 WT and 12 Emx-htt KO measured time points. In the case of rotarod, seventeen control mice (13 males, 4 females) and 11 Emx-htt KO mice (8 males, 3 females) were studied, in some cases at different points during the lifespan of individual mice, yielding 23 WT and 14 Emx-htt KO measured time points. Overall means for control and Emx-htt KO mice, irrespective of age, were compared by unpaired two-tailed t-test, while age-related trends were assessed by regression analysis and ANCOVA. The Emx-htt KO mice are hyperactive in open field, especially under 1 year of age, as revealed by their increased distance traveled (A), increased progression segment length (B), increased speed (C), increased endurance (D), and decreased number of stops (E). The increase in progression segment length was so great that the Emx-htt KO mice actually performed significantly fewer progression segments (F) than did control mice. Seventeen control mice (13 males, 4 females) and 11 Emx-htt KO mice (8 males, 3 females) were studied for open field analysis, in some cases at different points during the lifespan of individual mice, yielding 23 WT and 14 Emx-htt KO measured time points. Control and Emx-htt KO mice means were compared by unpaired two-tailed t-test, while age-related trends were assessed by regression analysis and ANCOVA.
Post-treatment Mac-2-Binding Protein is a Useful Predictor of Hepatocellular Carcinoma Development after Hepatitis C Virus Eradication
Background and aims: Recent advances in direct-acting antiviral drugs for hepatitis C virus (HCV) have dramatically improved the sustained virologic response (SVR) rate, but hepatocellular carcinoma (HCC) development is not uncommon even in patients who achieve an SVR. Wisteria floribunda agglutinin-positive Mac-2-binding protein (WFA+-M2BP) was recently developed as a noninvasive biomarker of liver fibrosis. However, the association between the WFA+-M2BP level and HCC development after the achievement of an SVR is unclear. Methods: We examined the association between WFA+-M2BP and HCC development in 522 HCV patients who achieved an SVR (Interferon [IFN]-based therapy, n=228; IFN-free therapy, n=294). Results: Multivariate analysis revealed that a high WFA+-M2BP level at SVR week 24 after treatment (SVR24) (hazard ratio [HR]=1.215, P=0.020), low platelet counts (HR=0.876, P=0.037) and old age (HR=1.073, P=0.012) were independent risk factors for HCC development regardless of the treatment regimen. Receiver operating characteristic curve analysis revealed that a WFA+-M2BP level at SVR24 of ≥1.62 cut-off index (COI) was the cut-off value for the prediction of HCC development (adjusted HR = 12.565, 95% CI 3.501-45.092, P<0.001). The 3- and 5-year cumulative incidences of HCC were 0.7% and 0.7% in patients with low WFA+-M2BP at SVR24 (<1.62 COI) and 4.8% and 12.4% in patients with high WFA+-M2BP (≥1.62 COI), respectively (P<0.001). Conclusion: The assessment of liver fibrosis using the WFA+-M2BP level at SVR24 is a useful predictor of HCC development after HCV eradication even in the IFN-free therapy era.
Introduction
Hepatitis C virus (HCV) infections represent an important global health problem leading to liver cirrhosis and hepatocellular carcinoma (HCC). At present, the World Health Organization estimates that 71 million people are chronically infected with HCV and approximately 400,000 people die every year from the complications of cirrhosis and HCC [1]. In Japan, it is estimated that 30,000 people died of HCC and 65% of all HCC deaths were due to chronic HCV infection [2]. Interferon (IFN)-based therapy, which was the standard treatment for chronic HCV infection until 2011, provided a sustained virologic response (SVR) in only 50% of the patients infected with HCV genotype 1, which was dominant in Japan. In addition, it was poorly tolerated due to adverse events, particularly in elderly patients or those with advanced stage disease. However, recent advances in IFN-free therapy with oral direct-acting antivirals (DAAs) have dramatically improved the SVR rates and tolerability, and a large number of HCV patients have now achieved an SVR with this treatment. Previous studies have shown that the eradication of HCV not only reduces the incidence of HCC, but also improves all-cause mortality [3,4]. However, HCC development is still observed in some patients who achieve an SVR. Indeed, the annual incidence of HCC among patients who achieved an SVR with IFN-based therapy ranges from 0.4 to 2% [4][5][6][7][8]. Therefore, it is important to identify the risk factors for HCC development after HCV eradication. Previously, we reported that the pretreatment Wisteria floribunda agglutinin-positive Mac-2-binding protein (WFA+-M2BP) level was a useful predictor of HCC development in patients who achieved an SVR with IFN-based therapy [9].
WFA+-M2BP was originally identified as a glycobiomarker of liver fibrosis, and the serum WFA+-M2BP level is reportedly significantly associated with histologically confirmed liver fibrosis in patients with chronic liver disease [10]. At present, WFA+-M2BP is generally used in Japan as one of the noninvasive biomarkers for the assessment of liver fibrosis. However, the change in the WFA+-M2BP level after HCV eradication and its association with HCC development after the achievement of an SVR with IFN-free therapy remain uncertain. The aim of this study was thus to determine the impact of WFA+-M2BP on the prediction of HCC development after HCV eradication.
Patients
A total of 522 patients who achieved an SVR with anti-viral therapy between March 2004 and December 2019 were enrolled in this study. All participants met the following inclusion and exclusion criteria: (1) presence of persistent HCV infection; (2) negativity for hepatitis B surface antigen or human immunodeficiency virus; (3) negative history of other chronic liver diseases (autoimmune hepatitis, primary biliary cirrhosis, hemochromatosis, and Wilson's disease); (4) absence of HCC or any suspicious lesions detected on ultrasonography, dynamic computed tomography, or magnetic resonance imaging at enrollment; (5) negative history of previous treatment for HCC or liver transplantation; (6) follow-up period of ≥6 months after the end of treatment (EOT); and (7) absence of HCC development within 6 months after the EOT. This study's protocol was approved by the Juntendo University Shizuoka Hospital's Ethics Committee, and the study was performed in accordance with the 2013 revision of the Declaration of Helsinki.
WFA+-M2BP Measurement
All routine laboratory data were collected immediately before treatment. The FIB-4 index was calculated as previously described [11]. Serum WFA+-M2BP levels were measured using pre-treatment serum samples stored at −20°C. WFA+-M2BP quantification was performed with a WFA-antibody immunoassay using a commercially available kit (HISCL M2BPGi; Sysmex Co., Kobe, Japan) and a fully automatic immunoanalyzer (HISCL-5000; Sysmex Co.). An SVR was defined as negativity for serum HCV RNA at SVR24.
Patient Follow-up
Serum tumor markers and ultrasonography were performed at least once every 6 months during the follow-up period. The negativity of serum HCV-RNA was confirmed annually. HCC diagnosis was confirmed predominantly via imaging studies, including dynamic computed tomography and magnetic resonance imaging. When typical imaging features were absent, a fine-needle aspiration biopsy was performed. The follow-up period was terminated on December 31, 2019.
Statistical Analyses
Categorical data were compared using the corrected chi-squared method. Continuous variables were analyzed using the Mann-Whitney U test. Factors associated with HCC development were determined using Cox proportional hazard models, and the HR and 95% CI were calculated. The cumulative incidence of HCC development was determined by the Kaplan-Meier method, and differences were tested using the log-rank test. P < 0.05 was considered statistically significant. All statistical analyses were performed using PASW Statistics 18 (IBM SPSS, Chicago, IL, USA).
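To make the analyses described above concrete, the sketch below shows how the FIB-4 index, the Cox model, and the Kaplan-Meier comparison with a log-rank test could be run in Python with the lifelines package. The FIB-4 formula used is the standard one (age × AST / [platelets × √ALT]), which we take to correspond to the formula cited as reference [11]; the data file, column names, and units are hypothetical, and the 1.62 COI split simply mirrors the cut-off reported in this study.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table: follow-up time (years), HCC event flag,
# age (years), AST/ALT (U/L), platelets (10^9/L), WFA+-M2BP at SVR24 (COI).
df = pd.read_csv("svr_cohort.csv")

# Standard FIB-4 formula (assumed to be the formula of reference [11])
df["fib4"] = (df["age"] * df["ast"]) / (df["platelets"] * np.sqrt(df["alt"]))

# Cox proportional hazards model for HCC development
cph = CoxPHFitter()
cph.fit(df[["years", "hcc", "age", "platelets", "m2bp_svr24"]],
        duration_col="years", event_col="hcc")
cph.print_summary()

# Kaplan-Meier curves and log-rank test split at the 1.62 COI cut-off
high = df["m2bp_svr24"] >= 1.62
km_low, km_high = KaplanMeierFitter(), KaplanMeierFitter()
km_low.fit(df.loc[~high, "years"], df.loc[~high, "hcc"], label="M2BP < 1.62 COI")
km_high.fit(df.loc[high, "years"], df.loc[high, "hcc"], label="M2BP >= 1.62 COI")
result = logrank_test(df.loc[~high, "years"], df.loc[high, "years"],
                      event_observed_A=df.loc[~high, "hcc"],
                      event_observed_B=df.loc[high, "hcc"])
print(result.p_value)
```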
Patient Characteristics
A total of 522 HCV patients who achieved an SVR were enrolled in this study; the clinical characteristics are summarized in Table 1. There were more patients with HCV genotype 1 infection, females, and elderly patients in the IFN-free therapy group than in the IFN-based therapy group, and the serum aspartate aminotransferase (AST) level, alanine aminotransferase (ALT) level, and platelet counts were lower in the IFN-free therapy group. The WFA+-M2BP levels did not differ between the two groups at baseline and significantly decreased at SVR week 24 (SVR24) in both groups compared to baseline (P < 0.001). The albumin and platelet counts were significantly increased at SVR24, while the AST, ALT, and alpha-fetoprotein (AFP) levels were significantly decreased at SVR24 compared to baseline (P < 0.001).
HCC Development After Achievement Of An SVR
Among the 522 patients, 14 (3.4%) developed HCC during a median follow-up period of 2.9 years (range, 0.5-13.4 years). The estimated cumulative incidences of HCC development were 1.7% and 3.3% at 3 and 5 years, respectively (Fig. 1). Further, the cumulative incidence of HCC development did not significantly differ according to the treatment regimen (Fig. 2). The albumin levels (P = 0.004) and platelet counts (P = 0.002) were significantly lower, and the AFP levels (P = 0.034), FIB-4 index (P = 0.003), WFA+-M2BP level at baseline (P = 0.002), and WFA+-M2BP level at SVR24 (P < 0.001) were significantly higher, in those who developed HCC than in those who did not (Table 2).
Discussion
The present study aimed to determine the utility of the post-treatment WFA+-M2BP level in the prediction of HCC development after HCV eradication. Our findings revealed that age, platelet counts, and the WFA+-M2BP level at SVR24 were useful predictors of HCC development after HCV eradication, regardless of the treatment regimen. Among these factors, both the platelet count and WFA+-M2BP level were previously found to be significantly associated with the severity of histological liver fibrosis [10,12]. In addition, age is a well-known surrogate marker of disease duration and is associated with more advanced fibrosis [13]. Several previous studies also showed that old age and advanced liver fibrosis were significant risk factors for HCC development [4,6,14,15]. Based on these results, the European Association for the Study of the Liver (EASL) recommends that patients with advanced fibrosis and cirrhosis who achieve an SVR should undergo surveillance for HCC every 6 months [16]. Our findings confirmed the clinical importance of assessing the severity of liver fibrosis in the development of HCC after HCV eradication. Although liver biopsy has been recognized as the gold standard for the assessment of fibrosis, it can exhibit sampling variability and a risk of lethal complications such as liver bleeding. Therefore, noninvasive markers such as the WFA+-M2BP level are important and useful for assessing the severity of liver fibrosis. We previously reported that the pre-treatment WFA+-M2BP level was a useful predictor of HCC development in patients who achieved an SVR by IFN-based therapy [9]. However, the present study demonstrated that the WFA+-M2BP level at SVR24 was more useful in predicting HCC development than the pre-treatment WFA+-M2BP level. In our previous report, we showed that the WFA+-M2BP level is affected by necroinflammatory activity in the liver [9]. In this study, the WFA+-M2BP levels significantly decreased after HCV eradication.
These results suggest that the WFA + -M2BP level at SVR24 is more useful than the pre-treatment level for assessing liver fibrosis and predicting HCC development. Another finding of the present study was that the incidence of HCC development after HCV eradication was comparable between IFN-based therapy and IFN-free therapy. Initially, some studies reported that the incidence of HCC development after HCV eradication was unexpectedly high, despite the lack of long-term follow-up [17,18]. However, several recent studies have revealed that the incidence of HCC development did not significantly differ between IFN-based therapy and IFN-free therapy, and this phenomenon can be explained by patient characteristics such as age and liver function [19,20]. In our study, there were more elderly patients, females, patients with HCV genotype 1 infection, and patients with low platelet counts (who were considered to be resistant to previous IFN-based therapy) in the IFN-free therapy group relative to the IFN-based therapy group. The present study has several limitations. First, the incidence of HCC development was low (9 of 228 patients treated with IFN-based therapy and 5 of 294 patients treated with IFN-free therapy) because our study was performed retrospectively at a single center. Second, the observation period was relatively short in patients treated with IFN-free therapy; the median observation period was only 3 years in the IFN-free therapy group, compared to 5.3 years in the IFN-based therapy group. Third, the patient background characteristics that might affect HCC development differed between patients treated with IFN-based therapy and those treated with IFN-free therapy. Therefore, a large-scale prospective study is required to validate our findings. In summary, the incidence of HCC after HCV eradication is comparable between IFN-based therapy and IFN-free therapy. The WFA + -M2BP level at SVR24 is a useful predictor of HCC development after HCV eradication, regardless of the treatment regimen. Our results suggest that assessing liver fibrosis using the WFA + -M2BP level at SVR24 is important for predicting HCC development after achieving an SVR.
2021-05-05T00:09:39.559Z
2021-03-11T00:00:00.000
{ "year": 2021, "sha1": "20529a41b591f18e8a0e856a36f04f7693d919f4", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-276843/v1.pdf?c=1631892948000", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "d305bbed74be2cd5450d5bed2dade0f7217f20ff", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233330937
pes2o/s2orc
v3-fos-license
Antiviral Activities of a Medicinal Plant Extract Against Sacbrood Virus in Honeybees
Background
Sacbrood is an infectious disease of the honey bee caused by Sacbrood virus (SBV), which belongs to the family Iflaviridae and is especially lethal for the Asian honeybee Apis cerana. Chinese Sacbrood virus (CSBV) is a geographic strain of SBV. Currently, there is a lack of an effective antiviral agent for controlling CSBV infection in honey bees.
Methods
Here, we explored the antiviral effect of the Chinese medicinal herb Radix isatidis on CSBV infection in A. cerana by inoculating 3rd instar larvae with purified CSBV and treating the infected bee larvae with R. isatidis extract at the same time. The growth, development, and survival of larvae in the control and treatment groups were compared. The CSBV copy number in the 4th, 5th, and 6th instar larvae was measured by the absolute quantification PCR method.
Results
Bioassays revealed that R. isatidis extract significantly inhibited the replication of CSBV, mitigated the impacts of CSBV on larval growth and development, reduced the mortality of CSBV-infected A. cerana larvae, and modulated the expression of immune transcripts in infected bees.
Conclusion
Although the mechanism underlying the inhibition of CSBV replication by the medicinal plant will require further investigation, this study demonstrated the antiviral activity of R. isatidis extract and provides a potential strategy for controlling SBV infection in honey bees.
Introduction
The Eastern honeybee (Apis cerana) is an important pollinator for crops and wild plants in Southeast Asia [31]. Compared to its close cousin the European honeybee Apis mellifera, which is the most widely managed crop pollinator worldwide, A. cerana has several advantages, including resistance to the parasitic mite Varroa destructor (the most devastating pest of European honeybees), tolerance to low temperatures, and the ability to utilize sporadic nectar sources in mountain and forest regions [14,15,30,33]. However, the health of the Asian honeybee is seriously threatened by Sacbrood virus (SBV). SBV, or Morator aetatulas, is an infectious virus belonging to the family Iflaviridae that infects larvae of both European and Asian honeybees. The infected larvae fail to reach the pupal stage and eventually die. SBV was first detected in A. mellifera in the United States in 1913 [41] and has subsequently been reported in all major world regions where beekeeping is practiced [1,9]. While SBV disease has been reported to affect about 15% of A. mellifera [27], it causes the most deadly and devastating disease in A. cerana. Historically, catastrophic outbreaks of SBV disease resulted in 95-100% mortality of A. cerana colonies in Thailand, Korea, China, and India [2,11,27,32,40]. SBV has evolved into multiple strains with different geographical distributions. Chinese Sacbrood virus (CSBV) is a geographic strain of SBV infecting the Chinese honeybee A. cerana [24]. CSBV primarily infects the 2nd to 3rd instar larvae of honeybees [23], resulting in failure to pupate, death, and eventually collapse of the whole colony [5]. CSBV was first found in A. cerana in Guangdong province in 1972 in China, then spread rapidly to other regions of China and Southeast Asia, and has been regarded as a major threat to A. cerana colonies [24]. So far, there is still no effective treatment for CSBV infection. 
While CSBV infection can be partially relieved by replacing the queen or removing the infected combs from the beehives, such strategies are not effective ways to prevent further dissemination of CSBV among honeybees. RNAi has emerged as a potential method for combating viral diseases in honeybees [6]. Zhang et al. [44] reported that CSBV was significantly inhibited when honeybee larvae were fed with dsRNA corresponding to the CSBV major capsid protein VP1, and that RNAi-based treatment protected bee larvae from CSBV infection under laboratory conditions. However, the use of RNAi in honeybee disease control has been limited by its high cost [39] and off-target effects [28], highlighting the need to develop new effective treatments for controlling CSBV infection in honeybees. Over the years, natural products from plants that possess active ingredients and safety characteristics have provided a rich source of candidate treatments for bee and hive health and show potential as effective agents against bee pathogens, including viruses [29,36]. Traditional Chinese herbal medicines display remarkable antiviral effects and have been widely used in the prevention and treatment of viral infectious diseases in humans and other animals [20]. We were therefore motivated to explore the antiviral activity of a Chinese herbal medicine, Radix isatidis (Banlangen and Daqingye in Chinese), for controlling CSBV infection in honeybees. R. isatidis is a commonly used traditional Chinese medicine known for its broad-spectrum activity against various pathogens, including human and avian influenza viruses [8,43]. In this study, we provide evidence that R. isatidis extract can effectively inhibit the replication of CSBV in A. cerana larvae, improve the immune response, and extend the lifespan of CSBV-infected larvae, demonstrating an effective medicine for protecting honeybees from SBV infection.
Ethics Statement
The studies involved the Asian honeybee (Apis cerana), which is neither an endangered nor a protected species. Observations were made at the Institute of Apicultural Research, Chinese Academy of Agricultural Sciences (IAR-CAAS), Beijing, China. The apiary is the property of the IAR-CAAS and is not privately owned or protected in any way. No specific permits were required for the studies described.
Apis cerana larvae samples
Honeybee (A. cerana) colonies used in the study originated from an experimental apiary maintained at the Institute of Apicultural Research, Chinese Academy of Agricultural Sciences, Beijing, China. In order to obtain 2nd instar larvae, the queen from a healthy colony was restricted on a comb to lay eggs for 12 hours. After 48 hours, the comb with the 2nd instar larvae was taken out of the colony. The 2nd instar larvae were then transferred into 24-well plates individually. The 24-well plates were placed in an incubator set at 32 ± 1 °C and 75 ± 5% relative humidity. The larvae were fed with a man-made larval food, which was replaced with fresh diet each day. Detailed information on the larval food used in the study is shown in Table 1. A diagram of the experimental design is shown in Fig. 1, and a thorough description of the experimental procedures follows in the subsequent sections.
CSBV Purification
For purification of CSBV, infected larvae with significant disease symptoms were collected from field colonies. The presence of CSBV in infected larvae was confirmed by RT-PCR based on the description of Chen et al. [10]. 
CSBV-infected larvae (N = 200) were divided into two groups and homogenized separately in 5 ml of sterile phosphate buffer solution (PBS) with a sterile grinder. The homogenized mixture was centrifuged at 8000 rpm at 4 °C for 30 min. The supernatant was passed through a 0.20 μm cell filter to remove tissue debris and bacteria suspended in the solution. The collected CSBV solution was further purified through CsCl gradient centrifugation [19]. The CsCl was removed by dialysis against PBS, and the purified CSBV was stored at −4 °C for subsequent inoculation.
RNA Extraction and PCR Amplification
Total RNA was extracted from CSBV-infected larvae using an RNeasy Mini Kit (Adlai, Beijing, China) according to the manufacturer's instructions. cDNA was synthesized using a reverse transcription kit (Takara, Tokyo, Japan). PCR amplification was performed under the following conditions: initial denaturation at 94 °C for 5 min, followed by 35 cycles of denaturation at 94 °C for 30 s, annealing at 58 °C for 30 s, and extension at 72 °C for 15 s, with a final extension at 72 °C for 5 min. A 593-bp fragment of the CSBV genome was amplified with primers described by Ma et al. [24]. The size of the PCR products was verified by electrophoresis on a 1% agarose gel in 1× TAE buffer. The specificity of the purified PCR products was confirmed by sequencing analysis. In addition, PCR assays were performed on RNA extracted from CSBV-infected larvae to exclude the presence of other common bee viruses, including Acute bee paralysis virus (ABPV), Black queen cell virus (BQCV), Chronic bee paralysis virus (CBPV), Deformed wing virus (DWV), Israeli acute paralysis virus (IAPV), and Kashmir bee virus (KBV), following the methods described in references [3,4,26,34,37,38].
Determination of CSBV Concentration
The concentration of the purified CSBV described above was determined by absolute quantitative polymerase chain reaction (qPCR) using the standard curve method. The forward and reverse primers (5′-ccttggagtttgctatttacg-3′ and 5′-cctacatccttgggtcag-3′) were used to amplify a 161-bp CSBV fragment. The qPCR was carried out in a BIOER LineGene 9600 real-time PCR system [17,18]. The qPCR reaction mixture (15 μl in total) contained 0.3 μl each of the forward and reverse primers, 7.5 μl SYBR mix, 1 μl cDNA template, and 5.9 μl water. The PCR reaction began with a single cycle at 95 °C for 3 min, followed by 40 cycles at 95 °C for 3 s, 60 °C for 1 min, and 70 °C for 30 s. The amplified PCR products of CSBV were purified and inserted into the plasmid vector pMD-18T (Takara, Japan) to generate recombinant plasmid DNA. A standard curve for a dilution series of recombinant CSBV plasmid DNA ranging from 10^0 to 10^9 genomic copies was established by plotting CT values vs. the log of the genome copy number. The purified CSBV solution was determined to contain 6.74 × 10^4 CSBV copies per microliter, which was defined as the initial concentration.
CSBV Inoculum
The 3rd instar (3-day-old) larvae reared in 24-well culture plates were divided into three groups by adding 5 µl of 3.37 × 10^4, 1.685 × 10^4, or 6.74 × 10^3 CSBV copies/µl into 20 µl of larval food (Table 1), respectively. At the same time, the larvae in the control group received the regular larval diet (Table 1). Each group comprised three 24-well culture plates (N = 72 larvae). The plates were placed into an incubator (32 °C, 75% relative humidity). 
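The standard-curve quantification described in the Determination of CSBV Concentration section can be made concrete with a short script; the Ct values below are hypothetical placeholders, shown only to illustrate how a sample Ct is converted back into a copy number.

```python
# Minimal sketch of the standard-curve quantification described above: fit a
# line to Ct vs. log10(copy number) for the plasmid dilution series, then use
# it to convert sample Ct values into CSBV copy numbers. All numbers here are
# hypothetical placeholders, not the authors' measurements.
import numpy as np

# Plasmid dilution series: known log10 copy numbers and measured Ct values.
log10_copies = np.arange(0, 10)                         # 10^0 ... 10^9 copies
ct_standards = np.array([38.1, 34.8, 31.4, 28.0, 24.7,
                         21.3, 18.0, 14.6, 11.2, 7.9])  # hypothetical Ct values

# Linear fit: Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(log10_copies, ct_standards, deg=1)
efficiency = 10 ** (-1.0 / slope) - 1.0                 # amplification efficiency

def copies_from_ct(ct: float) -> float:
    """Back-calculate the copy number for a sample Ct using the standard curve."""
    return 10 ** ((ct - intercept) / slope)

print(f"slope={slope:.2f}, efficiency={efficiency:.2%}")
print(f"Ct 25.0 -> {copies_from_ct(25.0):.2e} copies")
```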
According to the larval mortality results, Group II, inoculated with 1.685 × 10^4 CSBV copies/µl of larval diet, showed a lethal rate close to 50% (LD50); therefore, the viral concentration of 1.685 × 10^4 CSBV copies/µl was selected for the subsequent evaluation of the in vitro antiviral activity of R. isatidis extract against CSBV.
Radix isatidis Extract Preparation
The roots and leaves of Radix isatidis (I. indigotica Fort.) were purchased from Beijing Hongda Kelai Biotechnology Co., Ltd. Equal quantities of extract powder from R. isatidis roots and leaves (1:1 ratio) were mixed in high-purity water at a concentration of 22.9 mg/ml, which was used as a stock solution. The stock solution was diluted with the larval diet (Table 1) to final concentrations of 0.2 mg/ml, 0.32 mg/ml, and 0.43 mg/ml. Based on our pilot toxicity evaluation, 15 µl of R. isatidis extract at a concentration of 0.32 mg/ml was the most suitable dose for treatment, as there was no significant difference in survivorship between the control group and the treatment group, and this dose was therefore chosen for the subsequent antiviral bioassays.
Bioassay of CSBV Inhibition with R. isatidis Extract
The 3rd instar (3-day-old) larvae reared in 24-well culture plates were divided into three groups: Group I (negative control, NC; fed a regular diet without CSBV or R. isatidis extract); Group II (CSBV; inoculated with CSBV without R. isatidis extract); and Group III (CSBV + R. isatidis extract; inoculated with CSBV and treated with R. isatidis extract). Each group comprised three 24-well culture plates (one plate for the morphological study, one plate for assessing the antiviral activity of R. isatidis extract, and one plate for monitoring immune responses), giving twenty-four biological replicates (N = 24). In Group II, each larva was fed a larval diet containing 5 µl of 1.685 × 10^4 CSBV copies/µl, while in Group III, each larva was fed a diet containing both 5 µl of 1.685 × 10^4 CSBV copies/µl and 15 µl of R. isatidis extract (0.32 mg/ml). The volume of the diet was increased each day as the larval instar increased (Table 1), and the larval food was changed every day. For Groups II and III, the virus-containing food was replaced with regular larval food 24 hours after inoculation with CSBV. For Group III, the R. isatidis extract was provided to larvae from the 3rd instar (3-day-old) to the 6th instar (6-day-old). During the feeding process, the food was kept on the bottom of the culture well to avoid contact with the larvae. To evaluate the impact of R. isatidis extract on the development and survival of the CSBV-infected larvae, larval development and morphology were observed briefly under a stereomicroscope each day (to minimize disturbance of the developing larvae by the bright light) and recorded. Dead larvae were recorded and removed daily. The larval survival rate between the 4th instar (4-day-old) and the 6th instar (6-day-old) was recorded and compared among the different groups. To assess the antiviral activity of R. isatidis extract against CSBV, five larvae were sampled daily from each group for three days post CSBV inoculation and R. isatidis extract treatment. The CSBV copy number in the 4th, 5th, and 6th instar larvae was measured by the absolute quantification PCR method as described above and compared among the three experimental groups. 
To monitor the immune responses of CSBV-infected larvae during R. isatidis extract treatment, eight larvae were sampled daily from each group for three days post CSBV inoculation and R. isatidis extract treatment. The expression of four genes encoding the antimicrobial peptides apidaecin, abaecin, hymenoptaecin, and defensin in the 4th, 5th, and 6th instar larvae was measured and compared among the three experimental groups by the relative quantification PCR method (2^−ΔΔCt method) [35]. The primers for the immune genes and the housekeeping gene β-actin were described by Liu et al. [22] and Chaimanee et al. [7]. The PCR reaction was carried out using a BIOER LineGene 9600 real-time PCR system. The qPCR reaction mixture (20 μl in total) contained 0.8 μl of each primer, 10 μl SYBR mix, 1 μl cDNA template, and 7.4 μl water. The PCR reaction began with a single cycle at 95 °C for 3 min, followed by 35 cycles of 95 °C for 30 s, 60 °C for 30 s, and 72 °C for 30 s. The qPCR data analysis followed the procedure described in Liu et al. [22].
Statistical Analysis
The standard curve method was employed for the absolute quantification of CSBV. The relative expression level of each antimicrobial peptide target gene was calculated by the 2^−ΔΔCT method. The results are expressed as mean ± standard deviation (SD). One-way analysis of variance (ANOVA) and Tukey's Honestly Significant Difference (HSD) test were used to compare the differences in CSBV copy number, survival rate, and abundance of immune transcripts among the three groups using SPSS 22.0 (SPSS, Chicago, Illinois, USA). Percentage data (survival rates) were arcsine transformed before the statistical analysis. A p-value of ≤ 0.05 was regarded as statistically significant.
R. isatidis extract could reverse the effects of CSBV on larval growth and development
The comparison of larval morphology across the three groups showed that CSBV severely impacted the growth and development of A. cerana larvae and that R. isatidis extract was able to reverse the negative effects of CSBV on larval growth and development. Of the 24 larvae in each experimental group, 100% of larvae in Group-I displayed normal development, whereas 60.92% and 39.98% of larvae in Group-II were arrested at the fourth and fifth instar, respectively, without further development. Meanwhile, 86.15% of larvae in Group-III showed normal development, and only 4.76% and 12.26% of larvae were arrested at the fourth and the fifth instar, respectively. Representative larval morphological development is shown in Fig. 2. In Group-I, healthy larvae were pearly white and curved into a C-shape. The size of the larvae increased significantly with each successive larval instar starting at the 4th instar stage, and the body color of the larvae turned light yellow once they reached the 6th instar. Compared to the larvae in Group-I, the CSBV-infected larvae in Group-II showed a severe delay in development. The size of the CSBV-infected larvae in Group-II was significantly smaller than that of the larvae in Group-I. In addition, the color of the CSBV-infected larvae became dark brown, and more food was left on the bottom of the plate. Meanwhile, the CSBV-infected larvae treated with R. isatidis extract in Group-III displayed growth and development similar to the larvae in Group-I. There was no significant difference in overall larval morphology and development between Group-I and Group-III (Fig. 2). 
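As a reader aid, the two calculations named in the Statistical Analysis section (relative expression by the 2^−ΔΔCt method, and one-way ANOVA with Tukey's HSD on arcsine-transformed survival percentages) are sketched below; all values and group labels are hypothetical, and the authors used SPSS 22.0 rather than Python.

```python
# Illustrative sketch of the analyses described above. The Ct values and
# per-plate survival percentages are invented placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-ddCt: fold change of a target gene relative to a calibrator sample."""
    d_ct_sample = ct_target - ct_ref              # normalize to beta-actin
    d_ct_calibrator = ct_target_cal - ct_ref_cal
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

print(relative_expression(22.1, 18.0, 25.3, 18.2))  # hypothetical Ct values

# Survival percentages per plate (three plates per group, hypothetical).
survival = {
    "Group-I":   [98.6, 97.2, 98.6],
    "Group-II":  [75.0, 77.3, 79.1],
    "Group-III": [97.2, 100.0, 92.9],
}
# Arcsine square-root transform of the proportions before ANOVA.
transformed = {g: np.arcsin(np.sqrt(np.array(v) / 100.0)) for g, v in survival.items()}

f_stat, p_val = stats.f_oneway(*transformed.values())
print(f"one-way ANOVA: F={f_stat:.2f}, p={p_val:.4f}")

values = np.concatenate(list(transformed.values()))
labels = np.repeat(list(transformed.keys()), [len(v) for v in transformed.values()])
print(pairwise_tukeyhsd(values, labels))  # pairwise Tukey HSD comparisons
```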
Radix isatidis Extract Could Inhibit the Replication of CSBV
There was a statistically significant difference in the copy number of CSBV between Group II and Group III at each instar (4th instar: p < 0.01, 5th instar: p < 0.01, and 6th instar: p < 0.01, t-test). While there was no detectable level of CSBV in Group-I (NC), the copy number of CSBV in Group-II was 1.21 × 10^5 copies/μl, 4.71 × 10^4 copies/µl, and 2.328 × 10^4 copies/µl in the 4-day-old, 5-day-old, and 6-day-old larvae, respectively. Compared to Group-II, a substantial decrease in CSBV copy number was observed in Group-III 24 h after R. isatidis extract treatment. The CSBV copy number in Group-III continued to decrease steadily in response to the R. isatidis extract treatment over 72 h. The CSBV copy number in Group-III larvae was 1.35 × 10^3 copies/μl, 1.91 × 10^2 copies/μl, and 2.32 × 10^2 copies/μl in the 4-day-old, 5-day-old, and 6-day-old larvae, respectively, clearly indicating the inhibitory activity of R. isatidis extract against CSBV in vivo (Fig. 3).
Radix isatidis Extract Could Extend the Lifespan of CSBV-Infected Larvae
As shown in Fig. 4a, b, CSBV infection had a significant impact on larval survivorship. Larvae in Group-II displayed the highest mortality among the three groups during the period of observation and at each instar stage. While the survival rate in Group-I was 98.61%, 97.16%, and 98.61% for 4-day-old, 5-day-old, and 6-day-old larvae, respectively, the survival rate in Group-II was 75%, 77.3%, and 79.08% for 4-day-old, 5-day-old, and 6-day-old larvae, respectively. However, the survivorship of CSBV-infected bees was significantly improved by applying R. isatidis extract. The survival rate in Group-III was 97.22%, 100%, and 92.93% for 4-day-old, 5-day-old, and 6-day-old larvae, respectively. The overall survivorship during the period of observation was 98.61%, 43.05%, and 93.05% for Group-I, Group-II, and Group-III, respectively, clearly indicating that R. isatidis extract could significantly improve the survival of CSBV-infected larvae. One-way ANOVA and Tukey's Honestly Significant Difference tests on the arcsine-transformed percentage data showed that there were statistically significant differences in survival rates among the experimental groups.
[Fig. 2 caption: In Group-I, larvae received neither CSBV nor R. isatidis extract. In Group-II, larvae were inoculated with CSBV without treatment with R. isatidis extract. In Group III, larvae were inoculated with CSBV and treated with R. isatidis extract. In Group-I and Group-III, the size of the larvae increased significantly with each successive instar from the 4th to the 6th instar stage, whereas the CSBV-infected larvae in Group-II showed impaired growth and development. Scale bar = 1 mm.]
The survival rate in Group II was statistically significantly lower than that in Group I. The R. isatidis extract treatment improved the survival of CSBV-infected larvae, as there was no statistically significant difference in survival rate between Group I and Group III at the three instar stages examined (4th instar: P = 0.008, F(2,6) = 11.704, G-I vs. G-II P = 0.012, G-II vs. G-III P = 0.016) (Fig. 4a, b).
Relative Expression of Four Antimicrobial Peptides
Relative gene expression analysis showed that the expression of the genes encoding the antimicrobial peptides apidaecin, abaecin, hymenoptaecin, and defensin was activated in CSBV-infected larvae (Group II) at the different larval instar stages. 
One-way ANOVA and Tukey's Honestly Significant Difference tests showed that the expression levels of the four immune genes were significantly higher in Group-II than in Group-I (Abaecin, 4th instar: P = 0.000, F(2,6) = 62.091). R. isatidis extract inhibited CSBV replication, which in turn led to a reduction in the intensity of the immune response in honey bee larvae. The relative expression levels of the genes encoding apidaecin, abaecin, hymenoptaecin, and defensin in Group III larvae were significantly lower than those in Group II, indicating an attenuated immune response. However, the immune response did not disappear completely after the treatment with R. isatidis extract, as there were still significant differences in the relative expression levels of the genes encoding apidaecin, abaecin, hymenoptaecin, and defensin between Group I and Group III (Abaecin, 4th instar: G-II vs. G-III P = 0.001, G-I vs. G-III P = 0.040; 5th instar: G-II vs. G-III P = 0.000, G-I vs. G-III P = 0.032; 6th instar: G-II vs. G-III P = 0.000, G-I vs. G-III P = 0.001; Apidaecin, 4th instar: G-II vs. G-III P = 0.001, G-I vs. G-III P = 0.802; 5th instar: G-II vs. G-III P = 0.000, G-I vs. G-III P = 0.000; 6th instar: G-II vs. G-III P = 0.000, G-I vs. G-III P = 0.228; Hymenoptaecin, 4th instar: G-II vs. G-III P = 0.0001, G-I vs. G-III P = 0.0001; 5th instar: G-II vs. G-III P = 0.0001, G-I vs. G-III P = 0.0001; 6th instar: G-II vs. G-I P = 0.0001, G-I vs. G-III P = 0.002; Defensin, 4th instar: G-II vs. G-III P = 0.931, G-I vs. G-III P = 0.073; 5th instar: G-II vs. G-III P = 0.001, G-I vs. G-III P = 0.042; 6th instar: G-II vs. G-III P = 0.0001, G-I vs. G-III P = 0.0001) (Fig. 5).
[Fig. 3 caption: Inhibitory effects of R. isatidis extract on CSBV replication. In Group-II, larvae were inoculated with CSBV without treatment with R. isatidis extract. In Group III, larvae were inoculated with CSBV and treated with R. isatidis extract. Absolute RT-qPCR measurement of CSBV gene copy number was conducted on the 4th, 5th, and 6th instar larvae 24 h post CSBV inoculation for both Group-II and Group-III. Two asterisks (**) denote a statistically significant difference between the two groups (P ≤ 0.01, Student's t-test).]
Except for defensin, the other three immune genes had their peak of expression 24 h post CSBV infection and declined thereafter. This inducible innate immune response to CSBV infection subsided with the treatment of R. isatidis extract. Except for the expression of defensin in the 4th instar larvae, the expression levels of apidaecin, abaecin, and hymenoptaecin in Group-III were significantly lower than those in Group-II. The fold change in the gene expression levels between larvae in Group-II and larvae in Group-III was more than ten-fold (Fig. 5).
Discussion
Owing to their attractive properties of being safe, non-toxic, and biodegradable, natural products have been a rich source of medicines against various diseases, including viral diseases. In this report, we provided evidence that the extract of the Chinese medicinal plant R. isatidis could inhibit honey bee SBV replication, modulate honey bees' immune responses, and restore honey bees' viability under SBV disease challenge, adding a new dimension to the role of herbal medicines in disease treatment and management. Chinese sacbrood virus (CSBV) is the leading cause of A. cerana colony mortality, necessitating effective treatments that are safe, efficacious, and cost-effective. Herbal products have been used in traditional Chinese medicine for centuries. 
Previous studies have shown that Chinese herbal medicines have unique roles in blocking viral replication or exerting direct or indirect antiviral effects [20]. R. isatidis (Ban-Lan-Gen) is a traditional Chinese herbal medicine that has been used for the prevention and treatment of a wide range of diseases, including viral diseases (reviewed in Zhou 2012). Several biologically active compounds have been isolated from R. isatidis and shown to have antioxidant and antiviral properties. For example, indirubin, a main active ingredient of R. isatidis, was reported to have potent antiviral and anti-inflammatory effects via inhibition of RANTES, which is a member of a large family of cytokines that play a regulatory role in inflammatory processes [25,27]. In addition, R. isatidis polysaccharides were found to inhibit the replication of human and avian influenza viruses [21]. Furthermore, clemastanin B and epigoitrin, which are a major phenylpropanoid compound and an abundant alkaloid in R. isatidis, respectively, could effectively inhibit human and avian influenza viruses by blocking virus attachment and inhibiting virus multiplication [42,43]. In our study, the dosage of 4.8 µg of R. isatidis extract per larva each day (15 µL of 0.32 mg/mL R. isatidis extract) did not lead to toxic effects, indicating that R. isatidis is safe and non-toxic for honey bees. Our findings that the CSBV load of Group-III, treated with R. isatidis extract, was significantly lower than that of the virus control Group-II, and that the development and survival rate of Group-III were significantly higher than those of Group-II, demonstrate the significant antiviral activity of R. isatidis against lethal infections of CSBV.
[Fig. 4 caption: The overall survival rate of the different groups during the period of observation. In Group-I (CK), larvae received neither CSBV nor R. isatidis extract. In Group-II, larvae were inoculated with CSBV without treatment with R. isatidis extract. In Group III, larvae were inoculated with CSBV and treated with R. isatidis extract. Different lowercase letters above bars indicate statistically significant differences among groups (P ≤ 0.05, ANOVA and Tukey's tests).]
The results encourage future evaluation of R. isatidis extract as an antiviral agent for the treatment of other viruses in honey bees. Future studies are also needed to identify, isolate, and characterize the specific active ingredients of R. isatidis that are responsible for inhibiting CSBV. Innate immunity is the first line of defense against invading microorganisms in insects and consists of cellular and humoral responses [16]. The humoral response refers to the activation of downstream intracellular signaling molecules by germline-encoded pattern recognition receptors that recognize pathogen-associated molecular patterns, and the production of soluble effector molecules, antimicrobial peptides (AMPs), in response to invaders. Several AMPs, including apidaecin, hymenoptaecin, abaecin, and defensin, which are regulated by the two intracellular signaling pathways Toll and Imd/JNK, have been described in the honey bee [12,13]. During viral infection, the rapid production of AMPs as part of the host defense response is necessary to promote virus clearance and to prevent virus spread within the host. Our study showed that CSBV infection induced a rapid elevation of the expression levels of the AMPs apidaecin, hymenoptaecin, abaecin, and defensin, reflecting that the honey bee host's innate immunity acted quickly to mount a first line of defense. 
The significant reduction in virus titer after the treatment with R. isatidis extract was accompanied by substantially subsided host immune responses, as shown by the expression levels of the four AMPs in Group-III, which were over ten-fold lower than those in Group-II. This result clearly demonstrates the immunomodulatory role of the herbal extract. However, more research is needed to better understand the mechanism of R. isatidis in the protection against CSBV replication and in the modulation of the innate immune response.
Conclusion
In conclusion, our findings clearly demonstrate that R. isatidis can be a significant antiviral therapeutic agent to inhibit CSBV infection in honey bees. The results obtained from this study may serve as a basis for further exploration of herbal medicinal plants, or substances derived from them, for the discovery and production of novel antiviral drugs for disease treatment in honey bees.
[Fig. 5 caption (fragment): ... hymenoptaecin (c) and defensin (d). For each gene, the relative expression is expressed as an n-fold difference relative to the calibrator (marked by a star) by the 2^−ΔΔCt method. In Group-I (CK), larvae received neither CSBV nor R. isatidis extract. In Group-II, larvae were inoculated with CSBV without treatment with R. isatidis extract. In Group III, larvae were inoculated with CSBV and treated with R. isatidis extract. Different lowercase letters above bars indicate statistically significant differences among groups (P ≤ 0.05, ANOVA and Tukey's tests).]
2021-04-22T13:53:47.128Z
2021-04-21T00:00:00.000
{ "year": 2021, "sha1": "14a416ef92b73e3d0e788d9e01d34a8ca5b75df9", "oa_license": "CCBY", "oa_url": "https://virologyj.biomedcentral.com/track/pdf/10.1186/s12985-021-01550-y", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3fff98bf04ab57071ac134252c549b659e9a1ef1", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
2988721
pes2o/s2orc
v3-fos-license
Discovery of two new species of Crotalaria (Leguminosae, Crotalarieae) from Western Ghats, India
Two new species of Fabaceae-Papilionoideae are described and illustrated. Crotalaria suffruticosa from the Karul Ghat region of Maharashtra is morphologically close to C. albida and C. epunctata. C. multibracteata from the Panhala region of Maharashtra resembles C. vestita. C. suffruticosa differs from C. albida and C. epunctata in its habit, leaf, inflorescence, callosity, keel type, stigma, style morphology, and number of seeds per pod. To test whether the new species differ from their morphologically most similar species, we measured various traits and performed a Principal Component Analysis (PCA). This analysis shows that the new species differ from similar species in gross morphology for several diagnostic traits, reveals correlations between the variables and distances among groups, and estimates the contribution of each character. Phylogenetic analyses were also conducted based on nuclear (ITS) and plastid (matK) markers. The analyses revealed nucleotide differences between the new species and their close allies, supporting their distinctiveness. A map and a key covering all species of Crotalaria from Maharashtra state are provided. The conservation status of the two new species has also been assessed.
Introduction
The Crotalarieae (Benth.) Hutch. (Fabaceae) is the largest tribe in the genistoid alliance (containing 51% of genistoid legumes) and comprises 16 genera and 1204 species [1-5]. More than half of the diversity of the tribe belongs to the genus Crotalaria L., with 702 species [6,7]. Recent molecular work has provided profound insights into generic and specific relationships and a better understanding of the group within the tribe Crotalarieae and the genus Crotalaria, thereby establishing the monophyly of the genus [3,5,6,8]. An infrageneric classification of Crotalaria was attempted by Le Roux et al. [6] based on molecular phylogenetic data, which brought significant advances in the understanding of the infrageneric classification and redefined and complemented the previous classification given by Polhill in 1982 [9]. In India, revisionary work on the genus Crotalaria was undertaken by Ansari [10], to which further data were incorporated by Subramaniam et al. [8]. The genus Crotalaria is distributed in tropical and subtropical regions of the world. The species of Crotalaria exhibit great diversity of habit.
Morphological observations
The morphological analysis and description of the two new species are based on the examination of freshly collected and dry vouchers, in addition to flowers preserved in FAA (formaldehyde-glacial acetic acid-alcohol). The flowers were rehydrated in water with detergent and dissected to examine the minute details of the corolla under a binocular microscope (Olympus SZ61). 
A detailed comparison with measurements of selected traits and the characteristic features of both species is presented in tabular form. Morphological terminology follows Harris and Harris [17] and Hickey and King [18] for vegetative characters, Hewson [19] for indumentum description, and Endress [20] for inflorescence morphology. Both of the new species were found on exposed forest edges, cut slopes, rocky slopes, and grasslands. We critically compared the morphology of these specimens with specimens of Crotalaria albida, C. epunctata, and C. vestita housed in the herbaria at CAL, MH, and DUH. We show that these new species differ from their morphologically most similar relatives by measuring various traits on herbarium specimens and our own fresh collections. The most closely related species were identified based on previous revisionary and systematic works [6,7,10]. In order to understand the morphological diversity of these species and to ascertain the distinctiveness of both of the new species, a range of specimens, including types as well as voucher specimens, was examined from the following herbaria: ASSAM, BSD, BSI, CAL, DUH, FRLH, M, MH, SJC, SKU [21]. Specimen images were also studied from JSTOR Global Plants [22], the China Virtual Herbarium [23], the Flora of Pakistan [24], and other online herbaria (B, BM, BR, B-WILLD, E, FI, FOB, G-DC, K, L, LINN, NYBG, P, TUB). To visualize the geographical occurrence of the two new species and the others occurring in the same area, a distribution map was prepared using a base map from WORLDCLIM [25] and political borders retrieved from Esri Data and Maps [26]. The details of the coordinates are presented in S2 Appendix in the Supporting Information.
Identification of closely related species. A summary of all the diagnostic characters of both new species and their close allies is presented in tables. Mean trait values and standard errors with the minimum and maximum values are provided in Tables 1 and 2.
Taxa sampling
The field surveys and plant collection trips were conducted in 2011 and 2015 in Kolhapur, Maharashtra. Voucher specimens are deposited in BSD (Botanical Survey of India, Dehradun, India) and DUH (Delhi University Herbarium, India). Of the 37 species occurring in Maharashtra, we collected 33 (92%). Of these 33 species collected, 40% are endemic. A total of 94 accessions for the ITS marker and 86 for the plastid marker matK (including outgroups) were included, of which 72 accessions represent Indian species of Crotalaria [8,15]. Bolusia amboensis (Schinz) Harms and Euchlora hirsuta (Thunb.) Druce were included as outgroups for the analyses, following the molecular study of Boatwright et al. [3]. Voucher details along with author citations and the GenBank accession numbers are provided in the additional information, S1 Appendix. ITS and matK sequences of the outgroup taxa and the African Crotalaria species were retrieved from GenBank.
Molecular methods
Genomic DNA was extracted using a DNeasy plant mini kit (Qiagen, Amsterdam, The Netherlands). DNA amplification and sequencing of the ITS region were performed using the primers ITS 1 and ITS 2 [27]. The polymerase chain reaction (PCR) for the ITS region was performed with standard methods [8]. The matK region was amplified and sequenced as one segment using the barcoding primers of Jing-Yu et al. [28]. 
Reaction conditions for the matK region included denaturation at 94˚C for 3 min followed by 35 cycles of 1 min at 94˚C, 1 min at 52˚C, and 1 min at 72˚C, followed by a final extension at 72˚C for 5 min in an Applied Biosystems thermal cycler. PCR products were checked for the presence of appropriate bands on a 0.8% agarose gel, purified, and sequenced at SciGenome, Kochi, Kerala, India. The sequences comprised the ITS1, 5.8S, and ITS2 regions and the matK region. For the matK region, forward and reverse sequence reads were assembled using DNA Baser v.4.36 [29]. Consensus sequences for all accessions were imported into Clustal X [30] and MAFFT v.7 [31,32], in which the sequences were aligned, followed by manual adjustments in Mesquite v.2.72 [33]. Gaps were treated as missing data. For the ITS region, chromatograms were edited using Sequencher (Gene Codes Corporation, USA) [34] and partial bases were converted to N's. A total of 105 nucleotide sequences (including all outgroups) for ITS and 113 nucleotide sequences (including all outgroups) for matK were included in the analyses. The 98 sequences representing Indian accessions have been deposited in GenBank (S1 Appendix).
Phylogenetic analyses. Independent phylogenetic analyses were conducted for the ITS and matK regions. Both regions were concatenated using Mesquite v.2.72 [33]. The latter is part of the non-recombining plastid genome, and such regions are frequently combined for phylogenetic reconstruction [35-37].
[Table 1 fragment: flower length (mm), Crotalaria suffruticosa 9.8 ± 0.09 (n = 4), min = 9.5, max = 10; C. albida 8.56 ± 0.08 (n = 4), min = 8.3 (max value truncated in the source).]
For the purpose of this study, both regions were combined for the analyses. No major conflicts (incongruence) were identified between the single-region analyses, which showed broadly similar phylogenetic groups. The best-fitting model of nucleotide substitution was selected using the Akaike information criterion [38] as implemented in the program jModelTest 0.1.1 [39,40]. It was found to be GTR+G, with the lowest AIC score and the highest log-likelihood score for both regions. Bayesian analysis was performed using MrBayes 3.1.2 [41]. Parameters for the evolutionary model were set to default, and the state frequency parameter for the stationary nucleotide frequencies of the rate matrix was fixed. The number of chains was set to four, with three heated and one cold chain. Two runs were executed in parallel. Analyses were run for 7,000,000 generations until stationarity (standard deviation below 0.01). In each run, trees were sampled every 100 generations with a sample frequency of 10. The parameters were summarized after excluding 25% of the samples (burn-in), based on inspection of the log-likelihoods of the sampled trees after stationarity was reached. The Potential Scale Reduction Factor (a convergence diagnostic) approached 1.0 for all parameters, suggesting good sampling from the posterior probability distribution with no spread. Trees were summarized with the sumt burnin command, yielding a cladogram showing posterior probabilities and clade credibility for each split and a phylogram with mean branch lengths (Fig 1).
[Table 2 fragment: flower length (mm), Crotalaria multibracteata 7.21 ± 0.12 (n = 4), min = 6.9 (max value truncated in the source); corresponding values for C. vestita not recovered.]
Maximum likelihood (ML) analyses were performed using RAxML v.1.3 [42]. The heuristics of RAxML-III belong to the class of algorithms that optimize the likelihood of a starting tree already comprising all sequences. In contrast to other programs, RAxML-III starts by building an initial parsimony tree. 
For the likelihood (ML) analyses, the settings were "ML + thorough bootstrap" with 100 (replicate) runs and 1000 (bootstrap) repetitions under the GTR+G model (six general time-reversible substitution rates, assuming gamma rate heterogeneity).
Trait measurements and statistical analyses
To evaluate whether the two new species differed from their presumably closest relatives, and to understand which traits were most relevant to their identification, we performed a Principal Component Analysis (PCA) using Microsoft Excel 2000 with XLSTAT-Pro v.7.2 (Addinsoft, Inc., Brooklyn, New York) [43] and BioDiversity Pro v.2 [44], with the significance level set at 5% (Figs 2 and 3). PCA is one of many ways to analyse the structure of a given correlation matrix. PCA may be useful for selecting, from among the great number of morphometric characters, those that have some taxonomic value. Such a need arises within genera whose species are very uniform in morphological structure and are differentiated only by weak qualitative characters [45]. The specimens observed for this study are listed in the section below as "specimens examined". The following traits were measured for each of the five species (Crotalaria albida, C. epunctata, and C. suffruticosa; Crotalaria multibracteata and C. vestita): flower length, flower width, standard length, standard width, wing length, wing width, keel length, keel width, calyx tube length, gynoecium length, gynoecium width, seed length, seed width, leaf length, and leaf width. For each specimen, the mean values of the above traits were calculated (for example, the mean corolla length of the three flowers of an individual). These means were then used to calculate the significant ratios.
Phylogenetic analyses
The DNA sequencing of the nuclear ITS region of the two new species generated a sequence length of 792 bp in Crotalaria multibracteata (GenBank accession numbers: KY321450, KY321451) and 790 bp in C. suffruticosa (GenBank accession numbers: KY321453, KY321454, KY321455, KY321456), and sequencing of the matK region of the two new species generated sequences of 784 bp in C. multibracteata and 782 bp in C. suffruticosa. The concatenation of the two sequences of each new species resulted in 1576 and 1572 bp sequences for Crotalaria multibracteata and C. suffruticosa, respectively, in the aligned matrix (without gaps). The complete aligned matrix comprised 98 accessions containing 1660 characters. The phylogenies constructed under the maximum likelihood and Bayesian approaches revealed broadly the same topology (Fig 1). The tree resolves into eight major clades representing eight of the eleven sections proposed by Le Roux et al. [6] for the genus Crotalaria. The eight clades correspond to the following sections (as marked in the tree): Calycinae, Crotalaria, Geniculatae, Grandiflorae, Glaucae, Stipulosae, Incanae, and Hedriocarpae. All these major clades are well supported (clades for which the parsimony and likelihood support values are more than 80 BS and the Bayesian posterior probability values are more than 0.9 pp) and are congruent with earlier phylogenetic analyses [8]. The phylogeny supports the status of the genus as monophyletic (100 BS/1.00 pp). Most of the Indian species of Crotalaria (51 species in the Calycinae clade) form part of the Calycinae clade (100 BS/1.00 pp), which is congruent with the earlier phylogenetic work of Subramaniam et al. [8]. 
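The PCA described in the Trait measurements and statistical analyses section above can be illustrated with the short sketch below. The trait table, values, and the use of scikit-learn are hypothetical stand-ins (the authors used XLSTAT-Pro and BioDiversity Pro); the sketch only shows how standardized trait measurements yield species groupings and per-character contributions (loadings).

```python
# Illustrative sketch of a morphometric PCA of the kind used above. All trait
# values are invented placeholders, not the authors' measurements.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

traits = ["flower_length", "flower_width", "standard_length", "keel_length",
          "calyx_tube_length", "leaf_length"]

# One row per species mean (hypothetical values, in mm).
df = pd.DataFrame(
    [[9.8, 5.1, 8.9, 7.5, 1.6, 40.0],
     [8.6, 4.4, 7.8, 6.9, 1.5, 48.0],
     [8.3, 4.2, 7.6, 6.7, 1.4, 95.0],
     [7.2, 3.9, 6.5, 5.8, 1.3, 35.0],
     [7.6, 4.0, 6.8, 6.0, 1.4, 37.0]],
    columns=traits,
    index=["C. suffruticosa", "C. albida", "C. epunctata",
           "C. multibracteata", "C. vestita"],
)

# Standardize traits so the PCA reflects correlation structure, not units.
X = StandardScaler().fit_transform(df.values)
pca = PCA(n_components=2)
scores = pca.fit_transform(X)

print("explained variance ratio:", pca.explained_variance_ratio_)
loadings = pd.DataFrame(pca.components_.T, index=traits, columns=["PC1", "PC2"])
print(loadings)  # contribution of each character to the first two axes
```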
India hosts the maximum number of species in the Calycinae clade, which is mainly characterized by simple-leaved species (exception: C. orixensis). Within this simple-leaved clade, Crotalaria suffruticosa forms a distinct sub-clade (100 BS/1.00 pp) with C. albida and C. epunctata (86 BS/1.00 pp) in the Calycinae clade. The latter two species are strongly supported as sister to one another (100 BS/0.99 pp). Crotalaria multibracteata is sister to C. vestita (99 BS/1.00 pp), together forming a clade which is sister to C. hirta and C. mysorensis, albeit with low support (0.71 pp). The phylogeny demonstrates the distinct status of the new species C. suffruticosa and C. multibracteata and resolves their positions in separate subclades within the Calycinae clade (Fig 1). The new species Crotalaria suffruticosa differs from C. albida and C. epunctata in 16 nucleotide substitutions and one inversion at site 520 (across both regions). It also differs from Crotalaria albida and C. epunctata in two insertions, of length two and 12, at sites 520-521 and 895-906, respectively. Crotalaria suffruticosa shares similarity with C. albida and C. epunctata at sites 740 and 976 with two substitutions. Crotalaria multibracteata differs from C. vestita in five substitutions and one inversion at site 1564. It is similar to C. vestita in having eight shared substitutions and one insertion at sites 264-265.
Morphometric analyses
Principal Component Analysis, in the form of Pearson's coefficient, identified the significant characters that help in the morphological differentiation between the new species and the species most similar in gross morphology (Figs 2 and 3; Tables 1 and 2). These morphometric analyses have proved very useful in showing correlations between the variables and distances among groups and in estimating the contribution of each character. The significant characteristic ratios, which contribute to the uniqueness of the new species, are indicated in Figs 2 and 3. The mean diagnostic measurements separated the new species and their close allies into different groups. Traits plotted close to a given species in the PCA plot are influenced most strongly by that species.
Taxonomic treatment
These new species add to the existing 36 species in this hotspot region of the Indian subcontinent. Our group is investigating the biogeography of the genus, which will in future contribute to the understanding of its present-day distribution.
Crotalaria suffruticosa
Ascending or mainly erect suffruticose, branched herb, up to 0.5 m high, slightly woody from near the base. Stems terete, branches tomentose with white trichomes, prominent on the younger branches. Leaves simple, alternate; petiole up to 0.1 cm long; lamina elliptic to oblanceolate, ca. 4.0 × 1.6 cm, base acute, apex acute or mucronulate, margins entire and ciliate, venation pinnate-brochidodromous, with white velutinous trichomes beneath, glabrescent above, exstipulate. Inflorescence a terminal raceme; peduncles up to 8 cm long, bearing up to 7 flowers; axillary peduncles up to 4 cm long, with one to three flowers. Flowers ca. 1.0 × 0.7 cm across; bracts membranous, linear-lanceolate, up to 3.0 mm long, with white silky pubescence; pedicels 0.2-0.3 cm long, reflexed downwards; bracteole single, inserted on the pedicel, linear, pubescent, up to 3 mm long, with silky pubescence and slightly involute margin. Calyx 5-lobed, bi-lipped, the upper lip consisting of two sepals and the lower lip of three, sepals each ca. 
5.45 mm long, connate at base, tapering to the apex, apex acute, hirsute; tube ca. 1.63 mm long, margins ciliate and slightly involute. Corolla yellow, exserted from the calyx; vexillum obovate-elliptic, ca. 8.90 × 6.90 mm, claw ca. 0.87 mm long, with paired planar callosities of ca. 1.14 × 0.52 mm at the base; trichomes ca. 0.45 mm long, along almost the entire midvein and spreading towards the upper portion of the petal; wing petal ca. 7.48 × 2.89 mm, multi-veined, claw ca. 0.9 mm long, with cavae (sculpturing/ridges).
Phenology. Flowering September to December, fruiting November to February.
Distribution, habitat and ecology. Crotalaria suffruticosa grows on cut slopes, exposed forest edges, and rocky slopes of the Karul Ghat region (Fig 5). Karul Ghat is a stretch of typical grassland and forest edges. The temperature has a relatively narrow range between 10˚C and 35˚C. Mean relative humidity in summer (March-May) is up to 65%; it is 87% during the wet season (June-October) and 63% in winter (November-February) [46].
Etymology. The species is named for its suffruticose habit.
IUCN conservation status. Endangered (EN). The species is known only from two sites, the type locality and another adjacent area near the type locality. In accordance with two of the IUCN criteria [47,48], it is best considered endangered, according to the preliminary investigations made, because it meets the criteria under points A-E of section V of the IUCN guidelines.
Species recognition. Crotalaria suffruticosa resembles its most closely related species, C. albida and C. epunctata, in having a pubescent stem surface, an inflorescence that is a terminal or axillary raceme, a yellow corolla, a bilipped calyx with a pubescent surface and ciliate margin, bracts and bracteoles with white silky pubescence, an ovate-elliptic-oblong standard, a glabrous gynoecium surface, a brush-type stigma, and an elliptic-oblong pod with a glabrous surface. It differs from C. albida and C. epunctata in habit, height, leaf size and margin, inflorescence length and number of flowers per inflorescence, bracteole position, standard apex, callosity type, keel shape, curvature and vestiture, style type, and trichomes, details of which are summarized below and in Table 3. Crotalaria suffruticosa has a stiff, erect, suffruticose habit, up to 50 cm high (vs. up to 80 cm in C. albida and up to 1 m in C. epunctata), leaves up to 4 cm long with a ciliate margin (vs. up to 5 cm in C. albida and up to 10 cm in C. epunctata, both with non-ciliate margins), inflorescence up to 8 cm in length (vs. up to 15 cm in C. albida and up to 28 cm in C. epunctata) with up to 7 flowers per inflorescence (vs. up to 26 flowers per inflorescence in C. albida and up to 20 flowers per inflorescence in C. epunctata), bracteoles present on the middle of the pedicel (vs. attached at the base of the calyx in C. albida and C. epunctata), a notched standard apex (vs. a rounded standard apex in C. albida and C. epunctata), planar callosities (vs. ridge callosities in C. albida and C. epunctata), keel angled (vs. keel sub-angled in C. albida and C. epunctata) with the curvature in the lower third (vs. below the middle in C. albida and C. epunctata) and ciliate-glabrous vestiture (vs. lanate vestiture in C. albida and C. epunctata), and style with hairs arranged in two rows (vs. hairs in one row) and subgeniculate (vs. geniculate). A comparative morphology of all the characters between Crotalaria suffruticosa and its close allies is provided in Table 3. 
Based on the morphological and molecular evidence, the plant collected from Karul Ghat, Kolhapur, is best placed in Crotalaria sect. Calycinae based on its keel curvature, twisted beak, calyx more than half as long as the keel to longer than the keel and often bilipped, and usually simple leaves.
Crotalaria multibracteata
The region is a stretch of typical grassland with rocky mountain slopes and forest edges. The temperature has a relatively narrow range between 10˚C and 35˚C. Mean relative humidity in summer (March-May) is up to 65%; it is 87% during the wet season (June-October) and 63% in winter (November-February) [46].
Etymology. The species is named for the multiple bracts (more than 4) present on the peduncle.
IUCN conservation status. Endangered (EN). The species is known only from the type locality. In accordance with two of the IUCN criteria [47,48], it is best considered endangered, according to the preliminary investigations made, because it meets the criteria under points A-E of section V of the IUCN guidelines: (a) a suspected population size reduction of ≥50% over the last ten years, based on potential levels of exploitation from limestone and coal mining and jhum cultivation, and (b) an extent of occurrence suspected to be less than 5,000 km² and known to exist in no more than five locations.
Species recognition. Crotalaria multibracteata is a procumbent, slender herb with branched, terete stems. The species resembles C. vestita in having a bi-lipped corolla, a lanceolate, non-sticky calyx with an acute apex, an emarginate standard apex, a glabrous keel surface, alae absent, a beak twisted up to 90˚, glabrous anther filaments, a glabrous gynoecium surface, a style with hairs in two rows and subgeniculate curvature, a glabrous pod surface, and reniform, brown seeds with a glabrous surface (see Table 4). It differs from C. vestita in stem surface, leaf margin, petiole surface, calyx surface, the extra bracts (more than 4) on the peduncle, the standard adaxial surface, keel shape, keel vestiture, number of seeds per pod, and pod beak, details of which are summarized below. C. vestita has a stem surface densely clothed with yellow-brown silky hairs (long and often bulbous-based), whereas the newly discovered plant has a velutinous stem with white hairs. The leaf margin is involute in C. vestita and simple in C. multibracteata. The petiole is less than 1 mm long and densely hairy in C. vestita, whereas the leaves are sessile in C. multibracteata. In C. vestita, the bract margin is non-ciliate and the bracts are lanceolate or ovate-lanceolate and equal to the flower in length, compared with ciliate margins, linear bracts, and multiple bracts (more than 4) on the peduncle in C. multibracteata. The standard adaxial surface is glabrous in C. vestita and pubescent at the apex in C. multibracteata. The keel is angled with lanate vestiture in C. multibracteata (vs. ciliate-glabrous in C. vestita). The number of seeds per pod is 1-4 (vs. 15-33 in C. vestita). A pod beak is absent in C. multibracteata (vs. present in C. vestita). A comparative morphology of all the characters between Crotalaria multibracteata and its close allies is provided in Table 4. Based on the morphological and molecular evidence, the plant collected from Panhala, Kolhapur, is best placed in Crotalaria sect. Calycinae based on its keel curvature, twisted beak, calyx more than half as long as the keel to longer than the keel and often bilipped, and usually simple leaves.
2018-04-03T03:42:30.271Z
2018-02-15T00:00:00.000
{ "year": 2018, "sha1": "ce7f5f540dc98189ea38a90ec456c61ef9ca43dd", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0192226&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ce7f5f540dc98189ea38a90ec456c61ef9ca43dd", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
11915457
pes2o/s2orc
v3-fos-license
Incentive Mechanism Design for Heterogeneous Peer-to-Peer Networks: A Stackelberg Game Approach With high scalability, high video streaming quality, and low bandwidth requirement, peer-to-peer (P2P) systems have become a popular way to exchange files and deliver multimedia content over the internet. However, current P2P systems are suffering from"free-riding"due to the peers' selfish nature. In this paper, we propose a credit-based incentive mechanism to encourage peers to cooperate with each other in a heterogeneous network consisting of wired and wireless peers. The proposed mechanism can provide differentiated service to peers with different credits through biased resource allocation. A Stackelberg game is formulated to obtain the optimal pricing and purchasing strategies, which can jointly maximize the revenue of the uploader and the utilities of the downloaders. In particular, peers' heterogeneity and selfish nature are taken into consideration when designing the utility functions for the Stackelberg game. It is shown that the proposed resource allocation scheme is effective in providing service differentiation for peers and stimulating them to make contribution to the P2P streaming system. I. INTRODUCTION With the rapid development of peer-to-peer (P2P) communication technologies, P2P networks have become a popular way to exchange files and deliver multimedia content over the internet due to their low bandwidth requirement, good video streaming quality, and high flexibility. However, current P2P systems greatly rely on volitionary resource contribution from individual peers and do not enforce any compulsory contribution from these peers. This directly leads to the well-known "free-riding" problem, which refers to the phenomenon that a peer consumes free service provided by other peers without contributing any its resources to the P2P network. This tremendously degrades the performance of P2P systems, especially P2P multimedia streaming systems which have high requirements on time delay and data rate. Free-riding is common in P2P networks due to peers' selfish nature and the limited network resources. Most peers only want to maximize their own benefits without caring about the overall performance of the whole P2P community. It is reported in [2] that more than 70% P2P users do not share any file in Gnutella system. Therefore, to enhance the performance of P2P networks, effective incentive mechanisms need to be put in place to stimulate the cooperation between peers and encourage them to make contribution to the P2P system. On the other hand, recent advances in wireless communications technologies (3G/4G networks) and smart phones have enabled the development of mobile version of P2P applications for smart phones, such as PPtv [3] and PPStream [4]. People use these mobile P2P applications to watch movies, watch dramas, or listen to music when traveling on buses and metros. Due to the convenience, mobile P2P users are increasing dramatically nowadays. As compared to the wired P2P users, mobile P2P users are more selfish due to the high cost of mobile data. Thus, there is also a compelling need to design effective incentive mechanisms for mobile P2P applications. The existing incentive mechanisms for P2P systems are mainly designed to work in wired networks. For the heterogeneous networks with both wired and wireless nodes, these incentive mechanisms may not work well due to the differences between the wired nodes and the wireless nodes. 
For example, the computing capability of the wireless nodes (such as smart phones and tablet PCs) is usually weaker than that of the wired nodes (such as desktop PCs, and workstations). Thus, incentive mechanisms with high complexity may not be suitable for mobile applications. It is true that there exist high-end smartphones with high-end four-core or eight-core processors. However, incentive mechanisms with high complexity are still not preferred on these mobile devices since the high complexity computing can drain out the devices' batteries fast. In addition, the connection bandwidth of the wireless nodes is usually less than that of the wired nodes. This should be taken into consideration when designing the incentive mechanism to achieve relative fairness. However, to the best of our knowledge, most of the existing work fails to do this. All these differences between the wireless and wired nodes pose new challenges to the design of the incentive mechanism for the heterogeneous networks. In this paper, we propose a credit-based incentive mechanism for heterogeneous networks with both wired and wireless nodes. We consider a P2P streaming network where each peer can serve as an uploader and a downloader at the same time. When a peer uploads data chunks to other peers, it can earn certain credits for providing the service. When a peer downloads data chunks from other peers, it has to pay certain credits for consuming the resource. A peer's net contribution to the network is reflected by its accumulated credits. A Stackelberg game is formulated to provide differentiated service to peers with different credits. Particularly, peers' heterogeneity and selfish nature are taken into consideration when designing the utility functions. The main contributions and key results of this paper are summarized as follows. • A credit-based incentive mechanism based on Stackelberg games is proposed for P2P streaming networks. To the best of our knowledge, this is the first work that applies the Stackelberg game to the incentive mechanism design for P2P streaming networks. • Peers' heterogeneity is taken into consideration when designing the utility functions for the Stackelberg game. Thus, our incentive mechanism can be applied to heterogeneous P2P networks with wired and wireless peers having different connection bandwidths. • The selfish nature of peers is taken into consideration when designing the utility functions for the Stackelberg game, i.e., every peer is a strategic player with the aim to maximize its own benefit. This makes our incentive mechanism perform well in a P2P network environment with non-altruistic peers. • The optimal pricing strategies for the uploader and the optimal purchasing strategies for the downloader are both derived. The Stackelberg equilibrium is then obtained and shown to be unique and Pareto-optimal. • Two fully distributed implementation schemes are proposed based on the obtained theoretical results. It is shown that each of these schemes has its own advantages. • The impact of peer churn on the proposed incentive mechanism is analyzed. It is shown that the proposed mechanism can adapt to dynamic events such as peers joining or leaving the network. The remaining parts of this paper are organized as follows. In Sections II and III, we present the related work and describe our system model. In Sections IV and V, we present the problem formulation and its optimal solution. 
In Sections VI and VII, we propose two implementation schemes and study the impact of peer churn on the proposed schemes. Numerical results are given in Section VIII to evaluate the performance of the proposed schemes. Then, we discuss some possible extensions of this work in Section IX. Section X concludes the paper. II. RELATED WORK A simple incentive mechanism for P2P systems is the "tit-for-tat" strategy, where peers receive only as much as they contribute. A free rider that does not upload data chunks to other peers cannot get data chunks from them and suffers from poor streaming quality. Due to its simplicity and fairness, this scheme has been adopted by BitTorrent [5]. Though this strategy can increase the cooperation between peers to a certain level, it is shown in literature [6]- [8] that it may perform poorly in today's internet environment due to the asymmetry of the upload and download bandwidths. Unlike the "tit-for-tat" strategy, which enforces compulsory contribution from peers, another category of incentive mechanisms stimulate peers to contribute to the system by indirect reciprocity [9]- [17]. In these incentive mechanisms, the contribution of each peer is converted to a score which is then used to determine the reputation or rank of the peer among all the peers in the network. Peers with a high reputation are given a certain priority in utilizing the network resources, such as selecting peers or desirable media data chunks. Therefore, peers with a high reputation have more flexibility in choosing desired data suppliers and thus are more likely to receive high-quality streaming. On the other hand, peers with a low reputation have quite limited options in parent-selection and thus receive low-quality streaming. Through this way, the P2P systems can provide differentiated service to peers with different reputation values. Hence, peers are motivated to contribute more to the P2P system to earn a higher reputation. Recently, game theory [18] is found to be a powerful tool to study strategic interactions among rational peers and design incentive mechanisms to stimulate the cooperation among peers for P2P streaming systems. This is due to the fact that peers are selfish and strategic players in P2P streaming systems. It is their inherent nature to maximize their payoffs while simultaneously reducing their cost, i.e., enjoying a high quality streaming service while consuming least of their own resource. Game theory has been widely used in studying strategic interactions among these peers [19]- [29]. In [19]- [23], the authors discussed how to apply game theory to the design of incentive mechanisms for P2P networks at a high level. It is pointed out that straightforward use of results from traditional game theory do not fit well with the requirements of P2P networks. The utility functions must be customized for P2P networks. In [24], a repeated static game called Cournot Oligopoly game was formulated to model the interactions between peers, and an incentive mechanism was proposed by analyzing and solving the game. In [25], a simple, selfish, link-based incentive mechanism for unstructured P2P file sharing systems was proposed. It was shown that a greedy approach is sufficient for the system to evolve into a "good" state under the studied game model. In [26], an incentive mechanism was proposed for P2P networks based on the Bayes game. 
In [27], an infinitely repeated game was formulated to analyze the interactions between peers, and a so-called credit line mechanism was proposed to stimulate cooperation between peers. In [28], based on the first-price auction procedure, a paymentbased incentive mechanism was proposed for P2P streaming networks. Whereas, in [29], a non-cooperative competition game was used to provide service-differentiated resource allocation between competing peers in a P2P network. Different from these work, to the best of our knowledge, our work is the first work that models the peers' interactions as a Stackelberg game. Particularly, we take the peers' heterogeneity (wired/wireless peers with different connection bandwidths) into consideration when designing the utility functions for the Stackelberg game. Besides, two distributed implementations of the mechanism with different complexity are proposed to handle the difference in computing capability between wired and wireless nodes. III. SYSTEM MODEL In this paper, we consider a P2P streaming network where all the peers can serve as the uploader and the downloader at the same time. To eliminate the free-riding phenomenon and encourage cooperation between peers, we introduce the concept of credit into the system, where peers earn credits for providing service and consume credits for receiving service. We assume that all the peers are selfish and rational. Their aim is to maximize the credits that they can earn by fully utilizing their available network resource. Each peer has the right to set up a price for the service that it provides based on its own benefits. For fairness considerations, we assume that the uploader can only adopt the uniform pricing strategy, i.e., it cannot set different prices for different peers for the same amount of bandwidth allocation. In this paper, the credit of peer i is denoted by c i . The connection type (i.e., the download capacity) of peer i is denoted by d i . The download bandwidth allocation for peer i is denoted by x i . The upload bandwidth of peer k is denoted by u k . We denote the set of peers that request data chunks from peer k as S k . To avoid trivial bandwidth allocation schemes, we assume that i∈S k d i > u k . As illustrated in Fig. 1, the downloaders send their credits and connection types to the the uploader together with their data request. The uploader then decides the bandwidth price and allocates the bandwidth to requesters based on their credits and connection types. For example, suppose there are 100 peers requesting data chunks from peer i, but peer i can only provide service to 20 peers at the same time due to its limited upload bandwidth. Then, peer i can set up a high price that only 20 peers can accept, and the remaining 80 peers will give up due to the high cost. Through this way, peers with more credits are actually given a higher priority in utilizing network resources, and service differentiation for peers with different credits is thus realized. In this paper, the above service differentiation scheme is realized by a Stackelberg resource allocation game which is investigated in the following section. IV. STACKELBERG GAME FORMULATION A Stackelberg game is a strategic game that consists of a leader and several followers competing with each other on certain resources. The leader moves first and the followers move subsequently. In this paper, we formulate the uploader that has media data chunks as the leader, and the downloaders that request for media data chunks as the followers. 
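To keep the notation of the system model straight before turning to the game itself, the following is a minimal sketch in Python. The class and field names are our own and chosen only to mirror c_i, d_i, x_i and u_k from the text; they are not part of the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Peer:
    """Every peer can act as an uploader and a downloader at the same time."""
    peer_id: int
    credits: float           # c_i: earned by uploading, spent by downloading
    capacity: float          # d_i: maximum download bandwidth (connection type)
    upload_bw: float         # u_k: bandwidth it offers when acting as the uploader
    allocation: float = 0.0  # x_i: bandwidth granted in the current transaction

@dataclass
class ChunkRequest:
    """A download request: the requester reports its credits and connection type
    to the uploader together with the data request."""
    requester: Peer
    uploader: Peer

def demand_exceeds_supply(uploader: Peer, requesters: List[Peer]) -> bool:
    """The model assumes sum_{i in S_k} d_i > u_k; otherwise allocation is trivial."""
    return sum(p.capacity for p in requesters) > uploader.upload_bw
```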
The uploader (leader) imposes a price on each unit of bandwidth providing to each downloader. Then, the downloaders (followers) determine their optimal download bandwidths to maximize their individual utilities based on the assigned bandwidth price. The Stackelberg Game consists of two parts: the uploading game at the uploader side and the downloading game at the downloader side, which are introduced in the following two subsections, respectively. A. Uploading Game Design Under the Stackelberg game model, for the uploader k, if we denote its price on each unit of bandwidth providing to each downloader as µ, then its revenue maximization problem can be formulated as Uploading Game: where x i is the bandwidth that peer i intends to purchase, and x i is a function of the bandwidth price µ, i.e., x i f i (µ). S k denotes the set of peers that request data chunks from peer k, and u k is the total available upload bandwidth of peer k. Under the Stackelberg game formulation, the amount of bandwidth that peer i intends to purchase is decreasing in the bandwidth price µ. On the other hand, it is observed from (1) that the revenue of the uploader is the sum of products of the bandwidth price and individual peer's purchased bandwidth. Therefore, the uploader must carefully design its bandwidth pricing strategy in order to maximize its revenue. B. Downloading Game Design At the downloader side, for each peer i that requests data chunks from the uploader, the utility maximization problem can be formulated as where is the performance satisfaction factor for peer i, c i is the credits that peer i has, and d i is the maximum download bandwidth of peer i. The performance satisfaction factor s i reflects the degree of satisfaction or the "happines" of the downloader under the received bandwidth x i . A log function is adopted to model this factor due to the fact that log functions are shown in literature to be suitable to representing a large class of elastic data traffics including the media streaming service [30], [31]. When the received bandwidth x i = 0, the satisfaction factor s i is equal to 0, which indicates that peer i is unsatisfied with its system performance. On the other hand, when the received bandwidth x i = d i , the satisfaction factor s i is equal to 1, which indicates that peer i is fully satisfied with its system performance. The degree of satisfaction increases with the increase of the received bandwidth x i . It is also observed from (3) that the utility function of the downloader consist of two parts: c i s i and µx i . c i s i is the credits that peer i is willing to pay for the service it received, while µx i is the cost that peer i has to pay for obtaining the bandwidth x i . Obviously, with a larger bandwidth x i , peer i can obtain more satisfactory system performance, and thus is willing to pay more credits. On the other hand, the cost increases with the increase of the bandwidth x i . Therefore, optimal strategies are needed for a rational peer to balance its cost and the achieved system performance in order to maximize its utility. C. Stackelberg Equilibrium The uploading game and the downloading game together form a Stackelberg game. The objective of this game is to find the Stackelberg Equilibrium (SE) point(s) from which neither the leader nor the followers have incentives to deviate. For the proposed Stackelberg game, the SE is defined as follows. 
Definition 3.1: Let µ * be a solution for the uploading problem and x * i be a solution for the downloading game of the ith peer. Then, the point (µ * , x * ) is a SE for the proposed Stackelberg game if for any (µ, x) with µ > 0 and x 0, the following conditions are satisfied: where U up (·) and U down (·) are the utilities of the uploading game and the downloading game, respectively. For the proposed game in this paper, the SE can be obtained as follows: For a given price µ, the downloading game is solved first. Then, with the obtained best response functions x * of the downloaders, we solve the uploading game for the optimal price µ * . V. OPTIMAL RESOURCE ALLOCATION STRATEGIES In this section, we investigate the optimal resource allocation strategies for the proposed Stackelberg game, i.e., the optimal bandwidth allocation for the downloading game and the optimal pricing strategy for the uploading game. A. Optimal Download Bandwidth For a given µ, the optimal bandwidth x * i for peer i is given in the following theorem. Theorem 4.1: For a given µ, the optimal solution for the downloading game is Proof: The Lagrangian of the downloading game can be written as where α and β are the nonnegative dual variable associated with the constraints. The dual function is q(α, β) = max x i L(x i , α, β). The Lagrange dual problem is then given by min α≥0,β≥0 q(α, β). The duality gap is zero for the convex problem addressed here, and thus solving its dual problem is equivalent to solving the original problem. Thus, the optimal solutions needs to satisfy the following Karush-Kuhn-Tucker (KKT) conditions [32]: From (11), it follows Then, from (9), it follows α = 0. Therefore, (12) reduces This contradicts the presumption. Therefore, from (9), it follows Similarly, we can prove that Remark: It is observed from (7) that x * i is a piecewise function of the price µ. If the price µ is very high, the optimal download bandwidth x * i for peer i is 0; if the price µ is very low, peer i will download at its maximum bandwidth. In general, x * i is a decreasing function of µ. This indicates that the uploader can easily control the bandwidth allocated to peer i by controlling the price µ assigned for peer i. Besides, some key observations obtained from (7) are listed as follows. • Under the same prescribed price µ, comparing with the same type (i.e., the same d i ) of peers with higher contributions (i.e., more credits), low contributors are more likely to be rejected from downloading. The uploader can easily reject a low contributor i from downloading by setting a price larger than c i d i ln 2 . • Under the same prescribed price µ, more bandwidth is allocated to the peer with higher contributions for the same type of peer. High contributors are more likely to download at their maximum download bandwidth. B. Optimal Pricing Strategy With the results obtained in (7), we are now ready for solving the uploading game. Using f i (µ) to denote the x i obtained in (7), the uploading game can be rewritten as The above problem is difficult to solve since f i (µ) is a piece-wise function of µ. Therefore, to solve P1, we first consider the two-peer scenario, and then extend the results to the multi-peer scenario. 1) Two-peer Scenario: In this scenario, we consider the case that only two peers request data chunks from the uploader, and we assume that they are sorted in the order Then, the thresholds given in (7) of the two peers may have the following two possible orders. 
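Before the two-peer pricing analysis, the downloader side can be summarized in a short sketch. The exact expression of the satisfaction factor is not reproduced in this extraction; a log form matching the stated endpoints (s_i = 0 at x_i = 0 and s_i = 1 at x_i = d_i) is s_i = log2(1 + x_i/d_i), and this choice also reproduces the two thresholds c_i/(d_i ln 2) and c_i/(2 d_i ln 2) quoted for (7), so it is used here as an assumption. Function names are illustrative.

```python
import math

LN2 = math.log(2.0)

def satisfaction(x_i: float, d_i: float) -> float:
    """Assumed satisfaction factor s_i = log2(1 + x_i/d_i): 0 at x_i = 0, 1 at x_i = d_i."""
    return math.log2(1.0 + x_i / d_i)

def downloader_utility(x_i: float, c_i: float, d_i: float, mu: float) -> float:
    """U_down = c_i * s_i(x_i) - mu * x_i: willingness to pay minus bandwidth cost."""
    return c_i * satisfaction(x_i, d_i) - mu * x_i

def uploader_revenue(mu: float, allocations) -> float:
    """U_up = mu * sum_i x_i, with sum_i x_i <= u_k enforced by the pricing itself."""
    return mu * sum(allocations)

def best_response(c_i: float, d_i: float, mu: float) -> float:
    """Optimal download bandwidth x_i*(mu) in the spirit of Theorem 4.1:
       x_i* = 0                       if mu >= c_i / (d_i ln 2)
       x_i* = d_i                     if mu <= c_i / (2 d_i ln 2)
       x_i* = c_i / (mu ln 2) - d_i   otherwise (interior stationary point)."""
    if mu >= c_i / (d_i * LN2):
        return 0.0
    if mu <= c_i / (2.0 * d_i * LN2):
        return d_i
    return c_i / (mu * LN2) - d_i
```

The same best-response routine is reused in the pricing and implementation sketches that follow.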
Before we start the analysis for the above two cases, the upper limit and the lower limit of the optimal price µ * are given out in the following two propositions. According to (7), we have x 1 = 0 and x 2 = 0. The resulting revenue for the uploader is zero. It is easy to see that we can find another pricing strategy µ ′ that satisfies µ ′ < c 1 d 1 ln 2 and can generate a revenue larger than zero. This contradicts with our presumption. Thus, µ * ≥ c 1 d 1 ln 2 does not hold. Therefore, µ * must be less It is clear that x 1 and x 2 will not increase if the uploader sets a lower µ, which indicates that the uploader cannot increase its revenue by setting a price lower than c 2 2d 2 ln 2 . Therefore, the minimum value for the optimal price µ * is c 2 2d 2 ln 2 . Now, we solve P1 for the two-peer scenario under the above two cases, respectively. First, we consider Case I: To derive the optimal price µ * , we consider August 5, 2014 DRAFT the following three possible intervals. In this case, based on (7), we have x 1 = c 1 µ ln 2 − d 1 and x 2 = 0. As a result, P1 is reduced to a convex optimization problem, and it can be solved that µ * = c 1 (u k +d 1 ) ln 2 . Using the same approach as [33], it can be shown that µ * is the optimal solution for P1 if and only if c 2 In this case, based on (7), we have As a result, P1 becomes a convex optimization problem, and it can be solved that µ * = c 1 +c 2 (u k +d 1 +d 2 ) ln 2 . Same as Case I-a, it can be shown that the obtained µ * is the optimal solution for P1 if and only if c 1 Thus, it follows that • Case I-c: µ * ∈ c 2 2d 2 ln 2 , c 1 2d 1 ln 2 . In this case, based on (7), we have Same as the previous two subcases, P1 is reduced to a convex problem, and it follows that µ * = c 2 (u k −d 1 +d 2 ) ln 2 . It is the optimal solution for P1 if and only if c 2 Based on the above results, the optimal pricing strategy for the uploader under Case I can be summarized as Remark: It is observed from (19) that the optimal price can be divided into three regions based on the uploader's available bandwidth. Based on the demand of the peers and the supply of the uploader, the three regions are named as insufficient region, balance region, and sufficient region. In the insufficient region, the uploader's bandwidth is not enough to support all the peers. In this region, at least one peer will be not assigned any bandwidth. Peers are excluded from the game based on their c i d i values. Peers with low values are rejected first. For instance, in Case I-a, peer 2 is rejected from the game and only peer 1 remains in the game. In the balance region, the uploader will allocate bandwidth to each peer. However, none of the peers can download at its maximum bandwidth d i . In this region, the uploader allocates its limited bandwidth to the peers proportional to their c i d i values. In the sufficient region, the uploader's bandwidth is able to support both the peers, and at least one of them can download at its maximum download bandwidth. The peer with the largest c i d i will be the first peer that can download at its maximum download bandwidth. When the bandwidth is sufficiently large, both peers can download at their maximum download bandwidth. Now, we consider Case II: c 1 d 1 ln 2 > c 1 2d 1 ln 2 > c 2 d 2 ln 2 > c 2 2d 2 ln 2 . Similar as Case I, we consider different intervals to find the optimal price µ * in each interval. • Case II-a: µ * ∈ c 1 2d 1 ln 2 , c 1 d 1 ln 2 . In this case, based on (7), we have x 1 = c 1 µ ln 2 − d 1 and x 2 = 0. 
Same as Case I-a, P1 becomes a convex optimization problem, and it can be solved that µ * = c 1 (u k +d 1 ) ln 2 . Using the same approach as [33], it can be shown that µ * is the optimal solution for P1 if and only if c 1 2d 1 ln 2 ≤ c 1 (u k +d 1 ) ln 2 < c 1 d 1 ln 2 , i.e., d 1 ≥ u k > 0. Thus, it follows that • Case II-b: µ * ∈ c 2 d 2 ln 2 , c 1 2d 1 ln 2 . In this case, we show that the maximum possible utility for the uploader is lower than that obtained in Case II-a, and hence the optimal price will never lie in this range. Based on (7), we have x 1 = d 1 and x 2 = 0. P1 is thus reduced to finding the maximum value of µd 1 , and is valid only when d 1 ≤ u k . Since µ ∈ c 2 d 2 ln 2 , c 1 2d 1 ln 2 , the upper bound of µd 1 is c 1 2 ln 2 . However, when d 1 ≤ u k , it is observed from Case II-a that the maximum revenue is µ * c 1 µ * ln 2 − d 1 , where µ * is given by (20). Thus, µ * c 1 µ * ln 2 − d 1 can be computed as c 1 u k (u k +d 1 ) ln 2 , which is larger than c 1 2 ln 2 when d 1 ≤ u k . Thus, µ * should not lie in this range. • Case II-c: µ * ∈ c 2 2d 2 ln 2 , c 2 d 2 ln 2 . In this case, based on (7) Based on the above results, the optimal pricing strategy for the uploader under Case II can be summarized as Remark: It is observed from (22) that the optimal price obtained under Case II can be divided into two regions based on the uploader's available bandwidth. We refer to these two regions as insufficient region and sufficient region. In the insufficient region, the uploader will only accept the request from peer 1, which is the peer with high c i d i value. In the sufficient region, the uploader will allocate peer 1 its maximum download bandwidth. It is observed that the price strategy will allocate bandwidth to peer 2 only when peer 1 is allocated its full download bandwidth d 1 . This is quite different from the scenario in Case I. This phenomenon happens due to the fact that the peer 1's c i d i value is much larger than that of peer 2. In summary, the procedure to find the optimal price for the two-peer scenario is given in Fig. 2. It is not difficult to observe that the optimal price is determined by the following two factors: • The order of the downloaders' thresholds. • The uploader's available upload bandwidth. August 5, 2014 DRAFT Once these two factors are determined, the optimal price can be easily obtained. From an economic perspective, the order of the downloaders' thresholds actually reflect the demand and the purchasing power (i.e., the available accumulated credits) of the downloaders. The uploader's upload bandwidth reflects the market supply. The price of the goods is determined by the relationship between the supply and demand. Another key result observed from the above solutions is that the optimal price µ * is always obtained when (15) holds with equality, i.e., i∈S k f i (µ * ) = u k . This observation is very important, and plays a significant role in determining the Stackelberg equilibrium of the proposed game, which will be discussed later in Subsection 5.3. 2) Multi-peer Scenario: For the multi-peer scenario, there are more cases, and the number of cases increases with the increase of the number of peers. Thus, in general, we are not able to obtain a closed-form solution for the multi-peer scenario. However, once the order of the peers' thresholds is determined, a closed-form solution can be obtained. 
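The case analysis above admits a compact numerical summary. The sketch below (reusing best_response and LN2 from the earlier snippet) takes the three Case I closed forms as we read them from the text and validates each candidate against the budget identity sum_{i in S_k} f_i(mu*) = u_k highlighted above; for an arbitrary number of peers, the same identity can be solved by bisection, since total demand is non-increasing in mu, equals sum_i d_i (which exceeds u_k by assumption) as mu tends to 0, and is zero once mu >= max_i c_i/(d_i ln 2). The tolerances and function names are our own choices, not the paper's.

```python
def two_peer_price_case1(c1, d1, c2, d2, u_k, tol=1e-6):
    """Candidate closed forms quoted for the two-peer Case I:
    I-a (only peer 1 buys):          mu = c1 / ((u_k + d1) * ln 2)
    I-b (both buy, neither at cap):  mu = (c1 + c2) / ((u_k + d1 + d2) * ln 2)
    I-c (peer 1 capped at d1):       mu = c2 / ((u_k - d1 + d2) * ln 2)
    Rather than re-deriving the exact u_k regions, each candidate is checked
    against the budget identity sum_i f_i(mu) = u_k."""
    candidates = [c1 / ((u_k + d1) * LN2),
                  (c1 + c2) / ((u_k + d1 + d2) * LN2)]
    if u_k - d1 + d2 > 0:
        candidates.append(c2 / ((u_k - d1 + d2) * LN2))
    for mu in candidates:
        demand = best_response(c1, d1, mu) + best_response(c2, d2, mu)
        if abs(demand - u_k) < tol:
            return mu
    return None  # outside Case I; fall back to the general search below

def optimal_price(peers, u_k, iters=200):
    """General solver for mu*: bisection on total demand sum_i f_i(mu) = u_k.
    `peers` is a list of (c_i, d_i) pairs with sum_i d_i > u_k assumed."""
    def total_demand(mu):
        return sum(best_response(c, d, mu) for c, d in peers)
    lo, hi = 1e-12, max(c / (d * LN2) for c, d in peers)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if total_demand(mid) > u_k:
            lo = mid   # price too low: demand exceeds the upload bandwidth
        else:
            hi = mid   # price high enough: demand is at or below u_k
    return 0.5 * (lo + hi)
```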
For the purpose of illustration, we derive the closed-form solution for P1 when the thresholds of the peers satisfy the following where | · | denotes the cardinality of a set. To avoid trivial solutions, we assume that i∈S k d i > u k in the following analysis. Due to the complexity of f i (µ), P1 is difficult to solve directly. Therefore, to solve P1, we first consider the following problem This problem is a convex optimization problem. Therefore, this problem can be solved by standard convex optimization techniques. Details are omitted here for brevity. The optimal price µ for P2 can be obtained as follows, Now, we relate the optimal solution of P2 to that of P1 in the following proposition. Proposition 4.3: The price µ given in (25) is the optimal solution of P1 if and only if This proof consists of the following two parts. Part 1: Sufficiency. The optimal price µ * given by (25) is the optimal solution of P1 if 2d i , these inequalities can be compactly written as The "if" part is thus proved. Next, we consider the "only if" part, which is proved by contradiction as follows. Part 2: Necessity. For the ease of exposition, we assume that the peers are sorted by the following order: In order to prove the necessity, we suppose that the price µ * given in (25) is optimal even if the inequality given in (26) does not hold. We consider on possible region for µ * below, and a similar proof applies to the other regions. Suppose u k satisfies the following inequality and µ * given by (25) is still optimal when (28) holds. Since Then, according to (25), we have that µ * ≥ c |S k | d |S k | . Then, it follows from (7) that x |S k | = 0. This indicates that the peer with the smallest c i d i will be excluded from the game under the above condition. Then, it follows that µ * must be the optimal solution of P1 with |S k | − 1 peers, which is given as follows Thus, under the condition given by (28), using the same way as the proof of the previous "if" part, it can be shown that the optimal solution for this problem is given bỹ It is easy to observe that the optimal priceμ * given in (30) for the above problem is different from µ * given by (25). Thus, this contradicts with our presumption that µ * is optimal for P1 with u k satisfying (28). Using the same method, we can prove that µ * is not the optimal for P1 for other regions. Therefore, the interference vector µ * given by (25) is the optimal solution of P1 only if u k satisfying (27). The "only if" part thus follows. By combining the proofs of both the "sufficiency" and "necessity" parts, Proposition 4.3 is thus proved. With the results obtained above, we can solve a series of similar sub-problems of P1. Then, combing these obtained results by the same approach as the two-peer scenario, we can obtain the following theorem. Theorem 4.2: When the thresholds of the peers satisfy h 1 > · · · > h |S k | > h 1 /2 > · · · > h |S k | /2, where h i c i /d i , the optimal price µ * for P1 is then given by where For other cases of the multi-peer scenario, closed-form solutions can also be obtained in the same way. In general, the optimal pricing strategy for the multi-peer scenario can be obtained by the same procedure illustrated in Fig. 2. For the same type of peer (i.e., the same d), the optimal pricing scheme tends to allocate more bandwidth to peers with higher contribution (i.e., more credits c). This indicates that the obtained pricing strategy for the multi-peer scenario can provide a strong incentive for peers to cooperate with each other. 
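As a quick illustration of this incentive property, the helper functions sketched above can be run on a toy requester set. The numbers below are made up for illustration only and are not the paper's simulation parameters.

```python
# Hypothetical example: three peers with equal capacity d_i but different credits c_i.
peers = [(120.0, 2.0), (80.0, 2.0), (40.0, 2.0)]   # (c_i, d_i) pairs
u_k = 3.0                                          # uploader bandwidth, less than sum of d_i
mu_star = optimal_price(peers, u_k)
allocations = [best_response(c, d, mu_star) for c, d in peers]
# Qualitative outcome: allocations follow the credit ordering, the highest
# contributor downloads at (or near) its full capacity, the lowest contributor
# is priced out entirely, and the allocations sum to approximately u_k.
print(mu_star, allocations, sum(allocations))
```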
It is also observed that the optimal price µ * is always obtained when (15) holds with equality, i.e., i∈S k f i (µ * ) = u k . C. Stackelberg Equilibrium of the Proposed Game In this subsection, we investigate the SE for the proposed Stackelberg game, and show the SE is unique and Pareto-optimal when u k is given. With the optimal solution obtained in Theorem 4.1 and 4.2, the SE for the proposed Stackelberg game is given as follows. Theorem 4.3: The SE for the Stackelberg game formulated by the uploading game and the downloading game is (x * , µ * ), where x * is given by (7), and µ * is the optimal solution of P1. Similarly, since x * is the optimal solution for the downloading game, we have U down Then, combining the above two facts, according to the definition of SE given in Definition 3.1, (x * , µ * ) is the SE for the proposed Stackelberg game. Now, we show that the SE is unique and Pareto-optimal when u k is given. Theorem 4.4: The SE for the proposed Stackelberg game is unique and Pareto-optimal for a given u k . Proof: First, we show that the SE for the proposed Stackelberg game is unique for a given u k . As pointed out in the previous subsection, the optimal pricing strategy is unique when the order of peers' thresholds and u k are given. The order of peers' thresholds is determined by the values of c i and d i , ∀i, which are fixed during each implementation of the Stackelberg game. Thus, it is clear that the optimal price µ * is unique for a given u k . On the other hand, it is observed from (7) that the download bandwidth for each peer is unique under a given µ. Thus, it is obvious that the SE for the proposed game is unique under a given u k . Now, we show that the SE is Pareto-optimal for a given u k . Given an initial resource allocation scheme among a group of peers, a change to a different allocation scheme that makes at least one peer better off without making any other peers worse off is called a Pareto improvement. An allocation scheme is defined as "Pareto-optimal" when no further Pareto improvements can be made. In other words, in a Pareto-optimal equilibrium, no one can be made better off without making at least one individual worse off. It is observed that the optimal µ * always satisfies i∈S k f i (µ * ) = u k . Thus, increasing one peer's (e.g., peer 1) bandwidth allocation will inevitably decrease another peer's (e.g. peer 2) bandwidth allocation. This makes peer 2's bandwidth allocation deviates from its optimal bandwidth allocation, and consequentially decreasing its utility. Thus, no peer can be made better off without making some other peer worse off, and the SE is Pareto-optimal. VI. IMPLEMENTATION OF THE STACKELBERG GAME IN P2P STREAMING NETWORKS In previous section, we have solved the proposed Stackelberg game and obtained its SE. In this section, we investigate how to implement the proposed game in P2P networks in detail. Two implementation methods referred to as direct implementation and bargaining implementation are proposed and investigated as below. A. Direct Implementation Direct implementation is strictly based on the obtained results given in Section V. It is a one-round implementation with four stages, which are described as follows. and determines the order of peers' thresholds. Then, the uploader computes the optimal price µ * using the same approach as illustrated in Fig. 2, and broadcasts the optimal price µ * to all the peers. 
• Stage 3: Based on the received price, each peer computes its optimal download bandwidth x_i* from (7) and sends the result to the uploader. • Stage 4: The uploader allocates the bandwidth according to x_i*, ∀i ∈ S_k, and starts streaming. B. Bargaining Implementation In this subsection, we propose the bargaining implementation for the proposed Stackelberg game based on the characteristics of P1. It can be shown that P1 can be recast as an equivalent problem, referred to as P3. The bargaining implementation then rests on the following two facts: (i) f_i(µ) is a decreasing function of µ, which can be observed from (7); (ii) the upper limit of µ* is max_{i∈S_k} c_i/(d_i ln 2), which can be proved using the same approach as Proposition 4.1. • Stage 1: The uploader sets an initial price µ (where µ ≥ max_i c_i/(d_i ln 2)) and broadcasts it to all the downloaders. • Stage 2: Each downloader computes its optimal download bandwidth x_i from (7) for the given µ and sends x_i back to the uploader. • Stage 3: Having received x_i from all the peers, the uploader computes the total demand Σ_{i∈S_k} x_i and compares it with its upload bandwidth u_k. Let ε be a small positive constant that controls the algorithm accuracy. If Σ_{i∈S_k} x_i < u_k − ε, the uploader decreases the bandwidth price by ∆µ, where ∆µ is a small step size, and then broadcasts the new price to all the downloaders. • Stage 4: Stages 2 and 3 are repeated until |Σ_{i∈S_k} x_i − u_k| < ε. Then, the uploader starts streaming. The convergence of the bargaining algorithm is guaranteed by the following facts: (i) the optimal price µ* is always obtained when the upload bandwidth of the uploader is fully allocated, i.e., Σ_{i∈S_k} x_i = u_k; (ii) f_i(µ) is a decreasing function of µ; (iii) the SE for the proposed Stackelberg game is unique and Pareto-optimal for a given u_k. C. Direct Implementation vs. Bargaining Implementation In this subsection, we compare the two implementation schemes. The direct implementation is time-saving, since it needs only one round to determine the optimal bandwidth price and the peers' optimal download bandwidths. In contrast, the bargaining implementation requires much more time, because the uploader and the downloaders have to go through a multi-round bargaining process to reach the equilibrium. Thus, for delay-sensitive services such as P2P multimedia streaming, the direct implementation is preferred. Another difference between the two schemes is the requirement on the computing power of the uploader. The direct implementation requires the uploader to compute the optimal price following the procedure given in Section V-B, which is a complex procedure involving many cases, and therefore places a high demand on the uploader's computing power. In contrast, the bargaining implementation greatly relieves the computational burden on the uploader: the uploader only needs to compare the total demand with its upload bandwidth, which requires much less computing power. Thus, the bargaining implementation should be preferred by handheld mobile devices with limited computing power. It is worth pointing out that, no matter which implementation scheme is employed, the same Stackelberg equilibrium results for the same set of data, since the SE is unique, as proved in Theorem 4.4.
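A sketch of the bargaining loop follows, reusing best_response and LN2 from the earlier snippet. The step size, tolerance, round limit and the back-off guard are our own choices; the paper only specifies lowering the price by a small ∆µ while the total demand stays below u_k − ε. The direct implementation would instead compute µ* in one shot, for example with the closed forms or the bisection sketched above.

```python
def bargaining_price(peers, u_k, step=1e-3, eps=1e-2, max_rounds=1_000_000):
    """Bargaining implementation: start at a price no peer accepts and lower it
    step by step until total demand is within eps of the upload bandwidth u_k.
    `peers` is a list of (c_i, d_i) pairs."""
    mu = max(c / (d * LN2) for c, d in peers)                 # Stage 1: initial price
    for _ in range(max_rounds):
        demands = [best_response(c, d, mu) for c, d in peers]  # Stage 2: peers reply
        total = sum(demands)
        if abs(total - u_k) < eps:                             # Stage 4: equilibrium reached
            return mu, demands
        if total < u_k - eps:                                  # Stage 3: lower the price
            mu -= step
        else:
            # With a fixed step the loop can jump past u_k + eps; back off with a
            # smaller step so the stopping rule can still be met (our guard).
            mu += step
            step *= 0.5
        if mu <= 0:
            break
    return mu, [best_response(c, d, mu) for c, d in peers]
```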
VII. DEALING WITH DYNAMICS OF P2P STREAMING NETWORKS P2P networks are dynamic in nature: peers may leave or join the network at any time. How the equilibrium changes when peers leave or join is therefore of great importance for the study of a dynamic network. In this section, we investigate whether and how the equilibrium changes under these situations. When a peer joins the network, it is given a certain number of credits. The initial credits can be the same for every peer (e.g., 100 credits each) or different (e.g., c_i for peer i). A peer's credits are updated after each transaction, where one transaction means that a downloading peer has finished its download from an uploader. After one transaction of downloader i, its credits are updated by c_i = c_i − µ*x_i, and the credits of the uploader j are updated by c_j = c_j + µ*x_i. If multiple downloaders finish their downloads at the same time, the uploader updates its credits by collecting the credits from all of them together. To facilitate the analysis, we assume that there are N downloading peers and one uploading peer at the original SE. The original SE is denoted by (µ*, x*), where x* is the optimal bandwidth allocation vector for the downloading peers at the SE. The new SE after peers leave or join the network is denoted by (µ̃*, x̃*). We further assume that the information that a downloader leaves or joins the network is available only to the uploader, which does not share it with the other downloaders. A. Peers Leaving the P2P Streaming Network When a peer j leaves the P2P streaming network, the SE changes only when U_up(µ̃*, x̃*) > U_up(µ*, x*) − U_up(µ*, x_j), where U_up(µ̃*, x̃*) denotes the utility of the uploader at the new SE, U_up(µ*, x*) the utility of the uploader at the original SE, and U_up(µ*, x_j) peer j's contribution to the uploader's utility at the original SE. For the problem considered in this paper, this inequality (34) always holds. Thus, when a downloading peer leaves the network, the best strategy is to re-implement the Stackelberg game with the remaining N − 1 peers. B. Peers Joining the P2P Streaming Network When a peer joins the network, the SE changes only when U_up(µ̃*, x̃*) > U_up(µ*, x*). When a peer joins, the number of competing peers increases; as the competition between downloading peers becomes fiercer, the uploading peer has an incentive to raise the price of the resource to increase its revenue. It is worth pointing out that inequality (35) does not always hold. For example, if the c_i/d_i value of the joining peer is very small, this peer will be rejected and the SE will be sustained. A simple way to re-attain the equilibrium is to completely re-implement the Stackelberg game with the N + 1 peers. This method is guaranteed to reach a new SE, which is unique and Pareto-optimal; however, the new equilibrium is not necessarily in the existing peers' interests. This is due to the fact that Σ_{i∈S_k} x_i = u_k always holds at the equilibrium: if a new peer joins and is allocated a certain amount of download bandwidth, some of the existing peers' download bandwidth must decrease. For some peers, even though their download bandwidth may not decrease, their utility decreases due to the increase of the resource price. VIII. PERFORMANCE EVALUATION In this section, several numerical examples are provided to evaluate the performance of the proposed incentive resource allocation scheme. They show that the proposed resource allocation scheme provides strong incentives for peers to contribute to the P2P network.
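Before the numerical examples, the bookkeeping from Section VII can be condensed into two small helpers, building on the Peer class and the solver sketched earlier. The re-run-the-game strategy below is the simple one described above, not an incremental update; the function names are illustrative.

```python
def settle_transaction(uploader: Peer, downloader: Peer, mu_star: float, x_i: float) -> float:
    """Credit update after one finished transaction:
    the downloader pays mu* * x_i credits and the uploader earns them."""
    payment = mu_star * x_i
    downloader.credits -= payment
    uploader.credits += payment
    return payment

def rerun_game(uploader: Peer, requesters) -> float:
    """On peer churn (a requester joins or leaves S_k), simply re-implement the
    Stackelberg game with the current requester set and re-allocate bandwidth."""
    peers = [(p.credits, p.capacity) for p in requesters]
    mu_star = optimal_price(peers, uploader.upload_bw)
    for p in requesters:
        p.allocation = best_response(p.credits, p.capacity, mu_star)
    return mu_star
```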
Example 1: Peers with the same connection type but different contribution values In this example, we assume that there are four peers requesting data chunks from the uploader This illustrates that the proposed resource allocation strategy can provide differentiated service to peers with different contribution, and thus encourage peers to contribute to the network. B. Example 2: Peers with the same contribution value but different connection types In this example, we assume that there are four peers requesting data chunks from the uploader k. We assume the contribution values of the requesting peers are the same, and are given by It is observed from Fig. 5 that when the price is low, every peer can download at its maximum download bandwidth. The bandwidth assigned to each peer decreases with the increase of the price. It is also observed that our resource allocation scheme biases toward peers with smaller download capacities. This is as expected. Intuitively, given the same unit of bandwidth resource, a peer with a smaller download capacity achieves a higher performance satisfaction factor than a peer with a larger download capacity. Fig. 6. It is observed from Fig. 6 that our resource allocation scheme gives a higher priority to peers with higher contribution in bandwidth assignment. When the available upload bandwidth u k is small, the uploader will reject the request from the peers with low contribution, and provide the limited resource to the peers with high contribution. When the available upload bandwidth u k is large, the uploader will try to meet every peer's request. However, peers with higher contribution values are given a higher priority in obtaining the bandwidth. It is also observed that with the increasing of u k , the bandwidth assigned for each peer increases. This is due to the fact that the uploader's utility is maximized only when it contributes all its available upload bandwidth. D. Example 4: Join of Competing Peers In this example, for the purpose of comparison, we use the same system setup and simulation parameters as [29]. We assume the uploader's available bandwidth u k is 2 Mb/s. There are four competing peers requesting data chunks from the uploader k. The connection types of the requesting peers are assumed to be different, and are given by Fig. 7 that the equilibrium of the game changes whenever a new competing peer joins the game. The bandwidth allocation for the existing peers decrease due to the newcomer. This is in accordance with our analysis given in Section VII-B. It is also observed that the uploader assigns all its bandwidth without reservation at each new equilibrium. Besides, at each equilibrium, the bandwidth allocation is proportional to the contribution value of each peer. This indicates that the proposed incentive mechanism is adaptive to the dynamics of the P2P network, and can always provide differentiated service to peers with different contribution values. It is also observed that the proposed scheme can achieve the same performance as that of [29]. E. Example 5: Leave of Competing Peers In this example, we consider an opposite scenario of Example 4. We consider the scenario that peers leave the system one by one. For the convenience of analysis, we use the same system setup and simulation parameters as example 4. We assume that the four peers join the network at t = 20s, and they leave the network one by one. The leave times of peer 4, 3, 2 are From Fig. 
8, it is observed that the equilibrium of the game changes whenever a competing peer leaves the network. The bandwidth allocations for the existing peers increase when peers leave, which is in accordance with our analysis given in Section VII-A. It is also observed that the bandwidth assignment is proportional to the contribution value of each peer at each equilibrium. This indicates that the proposed incentive mechanism is robust to the dynamics of the P2P network and can always provide differentiated service to peers with different contribution values. It is also interesting to observe that the equilibria of Example 5 are exactly the same as those of Example 4 for the same number of peers. This is due to the fact that the uploader's utility obtained by selling the remaining bandwidth to the remaining peers is larger than that obtained by maintaining the current status in this example. IX. POSSIBLE EXTENSIONS A. Competition among Multiple Uploaders In this paper, we consider a simple model with one uploader and multiple downloaders. In reality, there may exist multiple uploaders holding overlapping data chunks, which implies possible competition among these uploaders and may affect their pricing strategies. Our incentive mechanism can be applied to this scenario with a few modifications. The presence of multiple uploaders induces a subgame in which the peers choose their uploaders. This adds an additional "Step 0: Peers choose their uploader." to the proposed algorithms. Given any set of choices by the peers, a Stackelberg game is induced at each uploader, which can be solved for a unique SE according to the analysis in the previous sections. Thus, the key issue is how to choose the uploaders. When a new peer joins the network, it has to choose one uploader from the multiple uploaders that hold its desired data chunks. The prices at these uploaders observed by the newcomer at this moment are fixed. Thus, it is reasonable for the newcomer to choose the uploader with the lowest price. However, it is worth pointing out that this scheme is in general suboptimal, because the price at the new SE with the newcomer may differ from the price at the old SE without the newcomer. Thus, it is possible that the newcomer unilaterally deviates in its choice of uploader to achieve a higher utility. Taking this into consideration would make the game very complex and highly difficult to analyze, so we leave it to future work. B. Trust Issues In this paper, we focus on designing an incentive mechanism for P2P networks. However, it is worth pointing out that trust issues are also very important for P2P systems. For example, the proposed algorithms require the downloading peers to report their c_i's and d_i's to the uploader. Malicious peers may misreport their credits c_i and their types d_i to gain advantages over other peers. For example, a malicious peer may deliberately report a bandwidth d̃_i smaller than its real download bandwidth d_i in order to increase its priority (c_i/d_i) in obtaining bandwidth. Another security issue is that malicious peers may deliberately upload polluted data chunks to other peers. Without effective measures to identify malicious peers, polluted data chunks could be disseminated through the whole network more quickly in a P2P network with incentive mechanisms than in one without incentive mechanisms.
This is due to the fact that peers are motivated to upload data chunks to each other to earn points or monetary rewards in a P2P system with incentive mechanisms. Without the ability to identify malicious peers, peers are more likely to forward polluted data chunks, consequently degrading the performance of the system. To deal with these trust issues, trust management schemes are needed to identify and defend against malicious peers. In other words, incentive mechanisms must be used in trusted environments or together with reliable trust management mechanisms. Though trust management for P2P networks has been extensively studied in literature [34]- [40], joint design of trust management and incentive mechanisms for P2P networks remains unstudied. Due to the complexity and the lack of space, we leave this as our future work. X. CONCLUSION In this paper, a credit-based incentive mechanism to stimulate the cooperation between peers in a P2P streaming network is proposed. Taking the peers' heterogeneity and selfish nature into consideration, a Stackelberg game is designed to provide incentives and service differentiation for peers with different credits and connection types. The optimal pricing and purchasing strategies, which can jointly maximize the uploader's and the downloaders' utility functions, are derived by solving the Stackelberg game. The Stackelberg equilibrium is shown to be unique and Paretooptimal. Then, two fully distributed implementation schemes are proposed and studied. It is shown that each of these schemes has its own advantages. The impact of peer churn on the proposed incentive mechanism is then analyzed. It is shown that the proposed mechanism can adapt to dynamic events such as peers joining or leaving the network. Finally, several numerical examples are presented, which show that the proposed incentive mechanism is effective in encouraging peers to cooperate with each other. ACKNOWLEDGMENT We would like to express our sincere thanks and appreciation to the associate editor and the anonymous reviewers for their valuable comments and helpful suggestions. This has resulted in a significantly improved manuscript.
2014-07-23T22:30:50.000Z
2014-07-23T00:00:00.000
{ "year": 2015, "sha1": "6305dbdfe11ae139483c769362d0c94dfb7f5cfa", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "6305dbdfe11ae139483c769362d0c94dfb7f5cfa", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
256631938
pes2o/s2orc
v3-fos-license
Mechanisms of amyloid-β34 generation indicate a pivotal role for BACE1 in amyloid homeostasis The beta‑site amyloid precursor protein (APP) cleaving enzyme (BACE1) was discovered due to its “amyloidogenic” activity which contributes to the production of amyloid-beta (Aβ) peptides. However, BACE1 also possesses an “amyloidolytic” activity, whereby it degrades longer Aβ peptides into a non‑toxic Aβ34 intermediate. Here, we examine conditions that shift the equilibrium between BACE1 amyloidogenic and amyloidolytic activities by altering BACE1/APP ratios. In Alzheimer disease brain tissue, we found an association between elevated levels of BACE1 and Aβ34. In mice, the deletion of one BACE1 gene copy reduced BACE1 amyloidolytic activity by ~ 50%. In cells, a stepwise increase of BACE1 but not APP expression promoted amyloidolytic cleavage resulting in dose-dependently increased Aβ34 levels. At the cellular level, a mislocalization of surplus BACE1 caused a reduction in Aβ34 levels. To align the role of γ-secretase in this pathway, we silenced Presenilin (PS) expression and identified PS2-γ-secretase as the main γ-secretase that generates Aβ40 and Aβ42 peptides serving as substrates for BACE1’s amyloidolytic cleavage to generate Aβ34. Results BACE1's amyloidogenic and amyloidolytic in vivo activities are determined by the enzyme to substrate ratio. To test whether there is a dichotomy between the amyloidogenic and amyloidolytic roles of BACE1 ( Fig. 1) in vivo, we measured cortical Aβ levels in human brain tissue, in wild-type, BACE1 knock-out (BACE1 −/−), heterozygous mice with half of the normal amount of active BACE1 (BACE1 +/−), and in APP transgenic mice expressing the human APP gene with the London mutation V717I. First, we examined BACE1 and APP levels in post-mortem human temporal cortical samples from 20 AD patients and 5 controls (Fig. 2a- Table 1). Western blot analysis revealed that cerebral BACE1 levels were ~ 2.1-fold elevated in AD patients compared to non-AD (Fig. 2c), which is in agreement with previous studies where BACE1 protein and Figure 1. APP processing by β-and γ-secretases and amyloid degradation into Aβ34 at low and high BACE1/ APP ratios. In the amyloidogenic pathway, sequential cleavage of APP by β-secretase and γ-secretase generates Aβ species of varying lengths including Aβ38, Aβ40 and Aβ42. In the Aβ amyloidolytic pathway, Aβ peptides resulting from the production pathway can be cleaved by β-secretase at the β34 site as part of the degradation pathway yielding the C-terminally truncated Aβ species, Aβ34. Under low BACE1/APP ratio, Aβ40 and Aβ42 levels are higher than Aβ34. In contrast, under high BACE1/APP ratio, Aβ34 levels increase as more Aβ40 and Aβ42 are degraded into Aβ34 by BACE1. www.nature.com/scientificreports/ activity levels were found to be increased in the brain regions affected by amyloid deposition [25][26][27][28] . Cerebral APP levels did not differ between AD patients and non-demented controls (Fig. 2b). We hypothesized that a surplus of BACE1 would lead to increased Aβ34, given that BACE1 levels are significantly elevated in AD, while APP levels and Aβ40 and Aβ42 production rates do not change 29 . Therefore, levels of Aβ34 and the longer Aβ species, i.e., Aβ40 and Aβ42 resulting from the classical amyloidogenic processing of APP, were measured in human brain extracts using our previously developed 4-plex assay (MSD-Meso Scale Discovery) 20 . 
Aβ34 levels were elevated ~ 1.8 fold, which is very similar to the ~ 2.1-fold elevated BACE1 level in AD brain tissue. Thus, both cerebral BACE1 and Aβ34 levels increased approximately ~ twofold (Fig. 2d), suggesting that excess BACE1 may generate more Aβ34 in AD brain tissue. A Spearman test suggests an overall trend for the correlation (ρ = 0.3575, p = 0.0795) between relative BACE1 values and absolute Aβ34 levels. Notably, in agreement with this data, serum BACE1 activity was found to be ~ 30% higher in AD patients compared to controls 30,31 . In addition, Aβ40 and Aβ42 species were significantly elevated in the AD group by ~ 44and ~ 23-fold, respectively ( Fig. 2e and f), possibly due to aggregated amyloid as previously reported 32 . To test whether the absence of aggregated amyloid yields a similar relationship between BACE1 and the different Aβ species, we measured Aβ34, Aβ40 and Aβ42 levels in the cortices of 6 months-old wild-type (+/+), heterozygous (+/−) and homozygous BACE1 knockout (−/−) mice (3 females and 3 males for each genotype) expressing endogenous levels of APP. We observed that Aβ34 levels were significantly reduced in BACE1 +/− animals but not the Aβ40 and Aβ42 levels. The loss of one BACE1 allele led to a significant decrease in Aβ34 levels (compare BACE1 +/+ and BACE1 +/−) (Fig. 2g), while no significant effects were observed for Aβ40 and Aβ42 ( Fig. 2h and i). Unaltered levels of Aβ40 and Aβ42 were also observed by others under the condition of lowered endogenous BACE1 activity 20,[33][34][35] . Thus, Aβ34 levels positively correlate with BACE1 levels, which is not the case for Aβ40 and Aβ42 levels that remain unaltered with the loss of one BACE1 allele. Then, we utilized transgenic mice expressing human APP, i.e., in vivo overexpressing conditions, to analyze the effect of substrate overexpression. Cortical Aβ levels of wild-type animals (7 females and 3 males) were compared to cortical Aβ levels of 6 months-old (pre-plaque) mice with the London mutation driven by the Thy1 promoter (4 females and 3 males). Aβ34 levels were increased ~ 2.5 fold and Aβ40 and Aβ42 were found elevated ~ four-and ~ fivefold, respectively ( Fig. 2j-l). Western blot analysis revealed that APP transgenic mice had ~ 2.2 fold more APP and showed normal levels of BACE1 ( Supplementary Fig. 1). Altogether, the results show that amyloidogenic activity was maintained with a single copy of the endogenous BACE1 gene ( Fig. 2h and i), while amyloidolytic activity was reduced upon the loss of one BACE1 gene copy (Fig. 2g). BACE1 expression promotes Aβ34 generation from APP and APP-C99 in vitro. To further determine how increased APP or BACE1 expression is influencing the balance between amyloidogenic and amyloidolytic cleavages, we tested cells transfected with increasing amounts of cDNA of either BACE1 or APP. When corresponding Western blots were quantified, increases in sAPPβ and sAPP total were observed under both APP695 and BACE1 overexpression conditions. However, the increase in APP processing by β-secretase, indicated by sAPPβ levels, was more pronounced under BACE1 overexpression compared to APP695 overexpression ( Supplementary Fig. 2a-d). Upon dose-dependently increased BACE1 levels (Fig. 3a), Aβ34 levels started to rise above Lower Limit of Detection (LLOD) with the lowest amount of BACE1 transfected and in a linear manner (y = 0.3225x + 103.0, p < 0.0001) over the entire range (Fig. 3c). 
APP overexpression in HEK293T cells resulted in increased levels of APP but left Aβ34 levels unaltered (Fig. 3b and c). These results demonstrate that a surplus of BACE1, but not of APP, promotes amyloidolytic cleavage yielding higher Aβ34 levels in non-neuronal cells where endogenous BACE1 expression is naturally low 36 . To study Aβ34 formation independently from prior β-secretase cleavage of APP by BACE1 (cleavage at the Asp 1 residue), we used a construct that encodes for the immediate γ-secretase substrate β-CTF, termed as APP-C99 37,38 (Fig. 3d). HEK293T cells were co-transfected with both increasing concentrations of BACE1 and a constant amount of APP-C99 and expression was verified by Western blot (Fig. 3d and e). Aβ34 levels were below LLOD under mock condition. A steady rise of Aβ34 levels was observed in BACE1 and APP-C99 co-transfected . BACE1 overexpression and co-expression with APP-C99 enhanced Aβ34 production from Aβ40 and Aβ42. Expression of APP, APP-C99, and BACE1 and Aβ34 generated from endogenous levels of APP and under APP and APP-C99 overexpression conditions (wild-type APP-C99 and APP-C99 M35I mutant) were analyzed by Western blot and ELISA, respectively. Uncropped blots are included in a Supplementary Information file. HEK293T cells were transfected with indicated increasing amounts of cDNA coding for BACE1 (a) or APP695 (b) or APP-C99 and BACE1 (d) or APP-C99 M35I and BACE1 (e). Representative Western blots from 5 independent experiments for the examination of APP, BACE1, sAPPβ and sAPP total expression (a, b, d and e). Quantification of absolute amounts of Aβ34 by ELISA (c and f). Aβ generation from BACE1 and/or APP-C99 overexpressing HEK293T cells was analyzed by ELISA, and immunoprecipitation (IP) Matrix Assisted Laser Desorption/Ionization (MALDI) mass spectrometry (MS). Cells were transfected with APP-C99, BACE1, and/or empty vector (Mock). Quantification of absolute amounts of Aβ34 (g), Aβ40 (h), and Aβ42 (i) with specific ELISAs. Aβ species were immunoprecipitated with monoclonal W02 and analyzed by MALDI-MS. Representative spectra from 3 independent experiments (j and k). Bars Fig. 2e and f). To prove that Aβ34 generation depended on the cleavage at the β34 site, we used an engineered mutant construct, where amino acid residue 35 (M35) of APP-C99 was mutated to Ile encoding for APP-C99 M35I (Fig. 3e). Under these conditions amyloidolytic cleavage at the β34 site was abolished (Fig. 3f). The quantitative analysis of the conditioned media from APP-C99 and BACE1 co-transfected HEK293T cells showed that BACE1 overexpression increased Aβ34 levels (Fig. 3g) while Aβ40 and Aβ42 levels were diminished ( Fig. 3h and i). Surprisingly, Aβ34 peptides were the predominating species under BACE1 and APP-C99 coexpressing conditions, as verified in immunoprecipitates by Matrix-Assisted Laser Desorption/Ionization Mass Spectrometry (MALDI-MS) ( Fig. 3j and k). Qualitative results confirm that longer and shorter Aβ species are released by cells only overexpressing APP-C99 ( Fig. 3k) but increased BACE1 levels correspond with increased detection of Aβ34 species. Cellular localization of BACE1 modulates its amyloidolytic activity. Next, we verified whether Aβ34 is generated in the endo-lysosomal system. We tested BACE1 mutants with amino acid substitutions in the acidic cluster motif, DDISLL (residues 495-500 of BACE1 contained within its cytosolic C-terminal domain) that are well-known for altering intracellular localization and trafficking of BACE1. 
Notably, substitutions of D495 or L499-L500 in the [DE]XXXL[LI] signal 39 were described to decrease endosomal localization and increase plasma membrane localization of BACE1 40,41 . We explored the amyloidolytic activity under the condition of impaired endosomal localization and trafficking using two different BACE1 constructs (LL/AA [DDISAA] and D495R [RDISLL]) in cells either stably overexpressing full-length APP or APP-C99. Unlike Aβ40 and Aβ42 levels, which were approximately seven-and fivefold higher in APP-C99-overexpressing cells, respectively, compared to APP-overexpressing cells, Aβ34 levels were relatively similar in both cell types, which supports the results from APP-C99 and BACE1 co-transfected cells shown above (Fig. 2f) implying that BACE1 most likely is the limiting factor for amyloidolytic activity. At similar expression levels of wild-type BACE1 and of the mutant constructs ( Fig. 4a and f), relative levels of Aβ40 and Aβ42 remained unaltered when compared to the control ("Mock") ( Fig. 4c, d, h and i). In contrast, Aβ34 levels were reduced by ~ 55% (compared to wt) for both BACE1 trafficking mutants in APP overexpressing cells ( Fig. 4b) and by ~ 25% (LL/AA) and ~ 10% (D495R) in APP-C99 overexpressing cells (Fig. 4g). The observed effect was attenuated in APP-C99 overexpressing cells, likely due to an excessive supply of substrate, i.e., 10-(compare Fig. 4c and h) and sixfold higher levels (compare Fig. 4d and i) of Aβ40 and Aβ42, respectively. We verified the cellular localization of BACE1 mutants that impair endosomal trafficking 40 by immunocytochemistry (ICC). Briefly, wild-type BACE1 showed a punctate like staining ( Fig. 4e and j) which overlapped with both the early-endosome marker EEA1 (early-endosome associated protein 1) and the lysosome marker LAMP1 (lysosome-associated membrane protein 1) in both cell types 42,43 (Supplementary Fig. 3). Quantitative colocalization analyses showed a significantly reduced colocalization with early endosomal marker EEA1 and lysosomal marker LAMP1 for both BACE1 variants, LL/AA and D495R (Supplementary Fig. 3g and h), which is in agreement with previous reports 41,44,45 . Altogether, quantification and colocalization results suggest that Aβ34 is mainly produced within the endolysosomal compartments, and mutations altering BACE1 localization impair Aβ34 production due to mislocalization or a delayed transport of the mutant enzyme. PS2 γ-secretase but not PS1 complexes contribute to Aβ34 production. Literature indicates that numerous C-terminally truncated Aβ species are generated by the γ-secretase complex in a PS1/2-dependent manner [46][47][48] and that γ-secretase activity is required first to produce secreted Aβ species 17,47 which are then cleaved again by BACE1 to generate Aβ34. To dissect the roles of PS1-and PS2-containing γ-secretase complexes in Aβ34 generation, we performed titration experiments with small interfering RNAs (siRNAs) to silence PSEN1 or PSEN2 expression. In a double knockdown titration experiment with either decreasing or increasing amounts of PS1 or PS2 siRNA and vice versa (Fig. 5), the total siRNA amount was equivalent to 15 pmol. The gradual downregulation of PSEN1 or PSEN2 was verified by Western blot analysis (Fig. 5a-c). Aβ34, Aβ40 and Aβ42 levels were quantified by ELISA in cell media ( Fig. 5d-f) and by MSD in cell lysates ( Fig. 5g-i). A significant gradual decrease in Aβ34 levels was uniquely observed with decreasing PS2 while Aβ40 and Aβ42 levels remained unchanged in cell media ( Fig. 5df). 
The highest PS2 knockdown resulted in a significant reduction of Aβ34 by ~ 20% (Fig. 5d). In contrast, Aβ34, www.nature.com/scientificreports/ Aβ40 or Aβ42 levels remained constant in cell lysates (Fig. 5g-i). Notably, PS1 protein levels increased 1.5-fold above the levels yielded by controls upon gradual PS2 knockdown, likely as a compensatory reaction (Fig. 5b). www.nature.com/scientificreports/ This effect was specific for PS1 and not observed for PS2 since upon PS1 siRNA treatment, PS2 protein levels remained constant (Fig. 5c). Aβ34 levels were not affected by the compensatory increase of PS1 but surprisingly decreased with PS2 reduction. This result suggests that PS2-γ-secretase complexes possess a unique role in Aβ34 generation while PS1 is not involved. Differential localization of PS1 and PS2 was verified by ICC analysis. Our results showed that PS1 is present throughout the cell while PS2 displayed punctate localization. PS2 co-localizes with both EEA1 and LAMP1 whereas PS1 does not ( Supplementary Fig. 4). ICC quantification also showed a decrease in PS1 and PS2 expression upon PS1 and PS2 knockdown (15 pmol condition), respectively ( Supplementary Fig. 5). We verified the results of the combinatorial knockdown experiment with single knockdowns of either PSEN1 or PSEN2 (Supplementary Fig. 6). The gradual downregulation of PSEN1 or PSEN2 was analyzed by Western blot (Supplementary Fig. 6a-c and e-g) and Aβ34, Aβ40 and Aβ42 levels in cell media were quantified by ELISA ( Supplementary Fig. 6d and i). Downregulation of PSEN1 left Aβ34, Aβ40 and Aβ42 levels unaltered (Supplementary Fig. 6d) and PS2 levels did not change upon gradual PS1 knockdown as described above (Supplementary Fig. 6c). In agreement with data shown in Fig. 5b, PS1 levels showed an unexpected compensatory increasing trend upon gradual PS2 knockdown (Supplementary Fig. 6g). Similar to the combinatorial knockdown experiments (Fig. 5), a significant gradual decrease in Aβ34 levels was observed with decreasing PS2 while Aβ40 and Aβ42 levels remained unchanged (Supplementary Fig. 6h) and the highest PS2 knockdown resulted in an approximately 20% reduction of Aβ34 levels, confirming the result above that PS2-γ-secretase but not PS1 contributes to Aβ34 generation. Discussion An imbalance between the formation and elimination of Aβ peptides has been suggested as the trigger in the pathogenesis of AD 49,50 . However, the knowledge about proteolytic degradation of Aβ discovered to date is rather limited to the family of amyloid-degrading enzymes (ADEs) with both membrane-bound and soluble members including extracellular matrix metalloproteinases (MMP2 and MMP9), IDE, NEP and ECE 19,[51][52][53][54] . Previous reports showed that a cleavage between L34 and M35 of the Aβ sequence exerted by BACE1 produced the non-amyloidogenic Aβ34 peptide, a soluble and non-toxic C-terminally truncated degradation product of longer Aβ peptides 17,18,55 . Aβ34 thus differs from aggregation prone Aβ species deposited in AD brain tissue. We identified Aβ34 as an indicator of amyloid clearance since Aβ34 was elevated in individuals with mild cognitive impairment 20 . Moreover, a significantly decreased Aβ34/Aβ40 ratio was observed in microvessels from AD patients due to a reduced proteolytic degradation of amyloid peptides in AD 21 . Here, we provide mechanistic evidence in vitro and in vivo supporting a prominent role of BACE1 in Aβ clearance. 
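For orientation, the cleavage geometry discussed above can be made concrete with the canonical human Aβ(1-42) sequence: the amyloidolytic cut falls between L34 and M35, and the M35I construct replaces the residue on the C-terminal side of that bond. The snippet below is purely illustrative and implies nothing about cleavage mechanism beyond the residue positions themselves.

```python
# Illustrative only: the standard human Abeta(1-42) sequence with the BACE1
# amyloidolytic cleavage site between L34 and M35, and the M35I substitution
# used as a cleavage-site control in the cell experiments above.
ABETA42 = "DAEFRHDSGYEVHHQKLVFFAEDVGSNKGAIIGLMVGGVVIA"  # residues 1-42

site = 34                                   # cleavage between residue 34 (L) and 35 (M)
abeta34 = ABETA42[:site]                    # C-terminally truncated product Abeta(1-34)
print("P1/P1' residues :", ABETA42[site - 1], "/", ABETA42[site])
print("Abeta34 product :", abeta34)

m35i = ABETA42[:34] + "I" + ABETA42[35:]    # Abeta region of the APP-C99 M35I construct
print("M35I Abeta(1-42):", m35i)
```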
Under conditions of either elevated levels of APP or of BACE1, Aβ34 production was only enhanced under a surplus of BACE1. Increasing amounts of BACE1 resulted in a dose-dependent increase in Aβ34 levels in all our experimental test systems. Specifically, the levels of Aβ34 depends directly on increased BACE1 levels in AD brain, i.e., Aβ34 levels were approximately twofold elevated in the brains of individuals with AD compared to non-demented controls and levels coincided with roughly twofold higher BACE1 levels in vivo. While increased BACE1 levels and its amyloidogenic activity in AD have been reported before [25][26][27][28] , the biological significance of BACE1 for amyloid clearance had remained enigmatic. We successfully confirmed an association between BACE1 expression and Aβ34 levels indicating amyloid clearance (i) in genetically modified mice where a single copy of the BACE1 gene (BACE1 +/−) halved cortical Aβ34 levels and (ii) in cell culture systems where the linearity between BACE1 and Aβ34 levels remained stable even at high BACE1/APP ratios. Thus, our findings provide an explanation for the previously reported and paradoxical inverse relationship for BACE1 expression and Aβ levels measured in in vitro and in vivo test systems under conditions of genetic and pharmacological manipulation of BACE1 expression 22,24,[56][57][58][59][60] . Further, we identified the endo-lysosomal system as the critical compartment for amyloidolytic cleavage of longer Aβ species into Aβ34 product. The finding that two BACE1 trafficking mutants known to impair endosomal trafficking 40 reduced Aβ34 levels while Aβ40 and Aβ42 levels remained unchanged is in agreement with reports that BACE1 activity is optimal at acidic pH in early endosomes and lysosomes 40,61,62 . Further, knockdowns of either the PS1 or the PS2 subunit showed that Aβ34 levels were specifically reduced upon PS2 knockdown. Thus, PS2-γ-secretase, rather than PS1, is involved in Aβ34 generation which is in full alignment with their reported cellular activities, as PS2 selectively cleaves late endosomal/lysosomal localized substrates and generates the prominent pool of intracellular Aβ peptides 15 . This assumption implies that Aβ peptide substrates are originating from PS2-γ-secretase complexes for BACE1 amyloidolytic cleavage. In contrast to the present study, we previously overexpressed wild type and a loss-of-function variant of PS1 and proposed that BACE1-generated Aβ34 was dependent on PS1 activity 17 . Then, this effect came from overexpressed PS1 likely "mislocalized" to the endosomal system, and we did not observe this dependency upon knockdown of the endogenous protein in this study. Thus, we propose that BACE1 amyloidolytic activity in the endo-lysosomal system might provide specificity and a spatial and temporal control of amyloid clearance through the BACE1-amyloidolytic-activity pathway. In agreement with this view, longer Aβ forms are more prone to aggregation in acidic compartments 63,64 requiring that clearance of Aβ40 and Aβ42 in acidic compartments is essential and must be highly effective. Thus, we here addressed the yet undefined role of BACE1 in amyloid degradation, explaining why increased BACE1 activity is not leading to increased amyloid production but decreased Aβ40 and Aβ42 levels in genetically modified mice. Our findings are of concern in the context of AD and current discussions to re-examine BACE1 inhibitors as therapeutic and preventive agents. 
The data indicates that the BACE1/APP ratio primarily affects the balance between BACE1-mediated Aβ production and degradation and highlights a critical role to BACE1 in amyloid clearance that has previously been neglected. Frozen samples from the temporal cortex from non-demented controls (n = 5) and confirmed Alzheimer disease Braak 4 to Braak 6 (n = 20) were prepared as previously described 65 . In brief, brain samples were thawed on ice, weighed and homogenized in buffer A (100 mM Tris-HCl, 150 mM NaCl, 2 × complete protease inhibitor cocktail (Roche)) using gentleMACS™ M Tubes/Dissociator at 4 °C (Miltenyi Biotech). TritonX-100, final concentration 1%, was added and samples were incubated for 1 h with agitation at 4 °C. Lysates were centrifuged at 10,621 rcf in a microfuge (Eppendorf) at 4 °C for 15 min to remove the nuclear fraction. Samples were measured with bicinchoninic acid assay (BCA assay, Thermo Fisher Scientific Inc., Pierce) and MSD assays. Cortices of transgenic mice expressing London APP and their wild-type littermates were provided by Dr. Claus Pietrzik's laboratory at the University of Mainz, Germany. Cortices of BACE1 +/+, BACE +/− and BACE1 −/− mice were provided by Dr. Paul Saftig's laboratory in University of Kiel, Germany. All mice were on C57BL/6 strain genetic background and were 6-months of age when sacrificed. Frozen mouse brains were thawed on ice, weighed, and homogenized in the homogenization buffer (100 mM Tris-HCl pH: 7.5, 150 mM NaCl and 2 × complete protease inhibitor cocktail (Roche)) using Dounce homogenizer. 10% Triton-X was added to the homogenates (final concentration: 1%). Brain homogenates were lysed at 4 °C for 1 h on a rotator. Lysates were centrifuged at 10,621 rcf in a microfuge (Eppendorf) at 4 °C for an hour to remove the debris. Supernatants were collected and diluted in the appropriate buffers for BCA, Western blot and MSD assays. Western blot analysis. Samples were prepared by adding LDS loading buffer and 2-Mercaptoethanol to the cell lysates according to the protocol provided by the manufacturer (Invitrogen). The proteins were solubilized and denatured by heating the samples to 70 °C for 10 min. Proteins were separated on 4-12% Bis-Tris gradient gels (Invitrogen) and were transferred to 0.45 µm nitrocellulose (Biorad) or polyvinylidene difluoride (PVDF) (Millipore) membranes at 400 mA at 4 °C for 2.5 h. Proteins were detected by the antibodies indicated in the antibodies section. The primary and secondary antibodies were used in phosphate-buffered saline. Signals were recorded on ImageQuant LAS 500 and LAS 600 (GE Healthcare Life Sciences). The primary antibodies used for Western Blot analysis were the following: anti-BACE1 1:2,000 dilution (monoclonal D10E5, Cell Signaling), anti-BACE1 1:2,000 dilution (B0681, Sigma-Aldrich), anti-actin 1:5,000 dilution (monoclonal mab1501, Millipore), anti-sAPPβ 1:2,000 dilution (IBL), and anti-APP ectodomain 22C11 1:10,000 dilution (Millipore), anti-flag 1:1,000 dilution (M2, F1804, Sigma-Aldrich), anti-PS2 (ab51249, Abcam), and anti-PS1 1:10,000 dilution (ab76083, Abcam). Mouse brain lysates. The secondary antibodies used for Western Blot analysis were the following: anti-mouse-and anti-rabbithorseradish peroxidase 1:10,000 dilution (Promega). Quantification of the Western Blots were performed with ImageJ and all protein levels were normalized to actin. All gels and blots used in figures are in compliance with the digital image and integrity policies (https:// www. nature. 
com/ srep/ journ al-polic ies/ edito rial-polic ies# digit al-image). Where cropped gels/blots are displayed, respective full-length gels and blots are included in a Supplementary Information file. Plates were blocked with 150 µl 5% MSD Blocker A solution for an hour at room temperature with gentle shaking and washed 3 times with 250 µl PBS-T (0.05% tween). Peptide calibrators were diluted in MSD Diluent 35. Plates were loaded with samples and calibrators together with SULFO-TAG™ 4G8 or 6E10 detection antibody diluted in MSD Diluent 100 and incubated overnight at 4 °C with gentle shaking. After three washes with 250 µl PBS-T, 150 µl 2 × MSD read buffer was added to the wells. Plates were read by an MSD QuickPlex SQ 120 Imager and data were analyzed by MSD Workbench® software. Confocal microscopy and image analysis. Single-or double-immunolabeled (Alexa Fluor-488, -568 or -647) samples were analyzed at the Imaging & Molecular Biology Platform (IMBP; McGill Life Sciences Complex) using a TCS SP8 multi-photon confocal microscope (Leica) with 63x/1.40 oil-immersion objectives (Leica, Wetzlar, Germany). Samples were excited with Coherent Chameleon Vision II multiphoton at 730 nm (2660 mW) for DAPI imaging. For each sample, 12-30 z-stack images were acquired using the same laser intensity settings for quantification. Z-stack images were processed using Image-J (Rasband, W.S., ImageJ, U.S. National Institutes of Health, Bethesda, Maryland, USA, https:// imagej. nih. gov/ ij/, 1997-2018) and total cell fluorescence was quantified with the analyze tool. To better visualize BACE1 localization, a heatmap was generated using Fire LUT in Image-J. The IMARIS Image Analysis Software (Bitplane (Oxford Instruments), MA, USA) software was used for cross-sectional analysis. BACE1 colocalization with EEA1 and LAMP1 were analyzed using ImageJ plugin JACoP 66 . Statistical analysis. For all experiments, different conditions were analyzed by one factor ANOVA (between subject design) or two factor ANOVA. Pairwise comparisons were performed either with Dunnet's or Tukey's post-hoc tests. The statistical analysis was run by GraphPad Prism 5. For human brain samples, Welch's t-tests were performed. Data that was not normally distributed (Gaussian normality test) were analyzed with non-parametric tests. Spearman's correlation was performed to test the correlation between BACE1 and Aβ34 levels. Ethics approval and consent to participate. Prior to starting the study, ethical approvals have been obtained. The study was conducted in accordance with Helsinki Declaration as revised in 2013 and performed in accordance with respective guidelines. The experimental protocols were approved by The Netherlands Brain Bank (NBB) where the brain post-mortem samples were obtained from, i.e. The Netherlands Institute for Neuroscience, Amsterdam (open access: www. brain bank. nl). All material has been collected from donors having provided written informed consent for a brain autopsy and the use of the material and for whom. Clinical information for research purposes had been obtained by the NBB. License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. 
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2023-02-08T15:24:46.773Z
2023-02-07T00:00:00.000
{ "year": 2023, "sha1": "72a3dfd630bac45e14e4aba3df18d077e0476692", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "72a3dfd630bac45e14e4aba3df18d077e0476692", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
265305537
pes2o/s2orc
v3-fos-license
Patient experiences with SARS-CoV-2: Associations between patient experience of disease and coping profiles

Introduction Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) caused an influx of patients with acute disease characterized by a variety of symptoms, termed COVID-19 disease, with some patients going on to develop post-acute COVID-19 syndrome. Individual factors like sex or coping styles are associated with a person's disease experience and quality of life. Individual differences in the coping styles used to manage COVID-19 related stress correlate with physical and mental health outcomes. Our study sought to understand the relationship between COVID-19 symptoms, severity of acute disease, and coping profiles.

Methods An online survey to assess symptoms, functional status, and recovery in a large group of patients was distributed nationally. The survey asked about symptoms and course of illness and included the Brief-COPE and the adapted Social Relationship Inventory. We used descriptive and cluster analyses to characterize patterns of survey responses.

Results 976 patients were included in the analysis. The most common symptoms reported by the patients were fatigue (72%), cough (71%), body aches/joint pain (66%), headache (62%), and fever/chills (62%). 284 participants reported PACS. We described three different coping profiles: outward, inward, and dynamic copers.

Discussion Fatigue, cough, and body aches/joint pains were the most frequently reported symptoms. PACS patients were sicker and more likely to have been hospitalized. Of the three coping profiles, outward copers were more likely to be admitted to the hospital and had the healthiest coping strategies. Dynamic copers activated several coping strategies, both positive and negative; they were also younger and more likely to report PACS.

Conclusion Cough, fatigue, and body aches/joint pain are common and most important to patients with acute COVID-19, while shortness of breath defined the experience for patients with PACS. Of the three coping profiles, dynamic copers were more likely to report PACS. Additional investigations into coping profiles in general, and into the experience of COVID-19 and PACS, are needed.
Introduction Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the agent responsible for coronavirus disease 2019 (COVID- 19), caused an influx of hospitalized patients with severe, acute disease world-wide [1].Acute disease presentation can be asymptomatic or severe with multi-organ failure requiring treatment in the intensive care unit.Some patients, regardless of initial disease severity, go on to develop post COVID-19 syndrome (PACS, also referred to as post-COVID condition and long COVID) [2,3] which effects them months to years after infection and requires ongoing care in the outpatient setting [4].PACS is characterized by a variety of persistent symptoms [5] such as fatigue and cognitive impairment (including memory loss and concentration difficulties).In addition, body systems such as cardiac and pulmonary systems can be affected [6][7][8].The lived experience of patients with persistent symptoms and how experience and symptoms might relate to one another is less well characterized. PACS occurs in adults and children with probable or confirmed SARS-CoV-2 infection, which persists months to years and cannot be explained by an alternative diagnosis [9,10].The long term symptoms and duration of PACS are difficult to predict and not clearly associated with severity of acute illness [4].Efforts to understand and treat PACS reveal that the syndrome affects each individual differently and risk factors are heterogeneous but may include older age, sex, and race and ethnicity [6].PACS is classified as a chronic disease due to the duration of symptoms and wide-spread impact of PACS syndrome across race, gender, and age.In July, 2021, PACS with physical or mental symptoms lasting longer than 6 months which affect one or more aspects of daily activities is considered a disability under the Americans with Disabilities Act [11]. Individual experiences of disease are influenced by factors such as sex or behavior traits (coping styles) and such characteristics can be associated with a person's experience of disease [12].For example, men are more likely to test positive and be hospitalized with COVID-19, whereas women are more likely to self-report symptoms of PACS [13,14].Patient factors, like advanced age and comorbidities, are associated with an increased risk for severe, acute disease [15,16].Data suggest that some racial and ethnic groups may be differently impacted by PACS with, for example, a higher prevalence found in people of Hispanic/Latino ethnicity [17][18][19].Understanding how individual factors impact the COVID-19 experience could be important to its treatment.The COVID-19 pandemic also revitalized interest in coping strategies [20][21][22].Individual differences in coping styles and strategies used to manage COVID-19 pandemic related stress correlate with physical and mental health status (such as anxiety, depression and quality of life) [21,23]. 
This diversity of presentation makes it crucial to incorporate patient reports to understand the experience and lasting effects of the disease.In addition, evidence based frameworks of stress and coping demonstrate that individuals under stress assess their stressful circumstances, harness internal resources, and engage in coping strategies to deal with stress [24].The majority of research has focused on coping with the pandemic at large and not on coping with the experience of COVID-19 or PACS.Our study sought to address this gap and understand the relationship between COVID-19 symptoms, severity of acute disease (indicated by area of hospitalization) and coping profiles.In addition, we were interested if there are coping profile patterns that related to the lived experience of acute COVID-19 and/or PACS. Methods In collaboration with our Patient Family Advisory Council (PFAC) we designed this study.Semi-structured interviews with concept elicitation were used to prompt symptoms experienced before, during, and after hospitalization as well as specific details of the COVID-19 experience, including what occurred during an average day in the hospital for hospitalized patients.Two cohorts of semi-structured interviews were completed.All semi-structured interviews were conducted 2-8 weeks after hospital discharge through a secure web platform including the participant and a member of the research team.A total of fifty-six patients who had a positive SARS-CoV-2 test and a diagnosis of COVID-19 viral illness completed the interviews.Interview exclusion criteria included prior cognitive impairment or significant mental illness impairing ability to participate in an interview, residence in a medical institution at the time of hospital admission, lack of stable domicile and/or not willing to share contact information, incarceration, and known or suspected pregnancy. Using the semi-structured interview data from the first 26 patients, we designed a survey instrument to assess symptoms, functional status, and recovery in a larger group of patients.This interview data was used to develop a ranking activity to allow patients to identify symptoms that they experienced, their most bothersome symptoms, and symptoms most important to be free from, and ranked a subset of symptoms by the degree to which the symptoms characterized their experience of COVID-19.The online survey also included validated questionnaires; the Brief-COPE to assess coping responses [25] and the Social Relationship Inventory (SRI) which was adapted in our prior work for clinical contexts in an effort to briefly assess complexity in social relationships [26,27].A second group of 30 patients (Fig 1 ) were recruited to complete the semi-structured interview and the online survey to evaluate the survey for completeness and clarity.Data from the second group was used to check the ranking questions that assess most important, common, and bothersome symptoms to make sure that the list generated from the data of the first group was not missing important symptoms and would reflect patient centered outcomes.We also assessed the time required to complete the survey to ensure the online survey was accessible and easy to navigate. 
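Because the Brief-COPE subscale scores feed into the clustering analysis described below, a brief scoring illustration may help. The item-to-subscale mapping in this sketch is a placeholder, not the published Carver scoring key (which should be used in practice); each subscale is assumed to be the sum of two items rated 1-4.

```python
# Hedged illustration of Brief-COPE subscale scoring. The item-number mapping
# below is a PLACEHOLDER, not the published key; responses are on a 1-4 scale.
import numpy as np

# Hypothetical mapping: subscale -> two item indices (0-based) in a 28-item form.
SUBSCALES = {
    "active_coping": (0, 1), "planning": (2, 3), "positive_reframing": (4, 5),
    "acceptance": (6, 7), "humor": (8, 9), "religion": (10, 11),
    "emotional_support": (12, 13), "instrumental_support": (14, 15),
    "self_distraction": (16, 17), "denial": (18, 19), "venting": (20, 21),
    "substance_use": (22, 23), "behavioral_disengagement": (24, 25),
    "self_blame": (26, 27),
}

def score_brief_cope(responses):
    """responses: array-like of 28 item ratings (1-4). Returns subscale sums."""
    r = np.asarray(responses)
    return {name: int(r[list(items)].sum()) for name, items in SUBSCALES.items()}

example = np.random.default_rng(0).integers(1, 5, size=28)  # one fake respondent
print(score_brief_cope(example))
```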
Online survey and study population Study patients were recruited from national research panels.An ethnically diverse group of COVID-19 survivors, aged 18-85 from around the United States, were recruited.The participants were screened and considered eligible if they reported a positive SARS-CoV-2 PCR and a COVID-19 related healthcare utilization (outpatient primary or urgent care, no hospital, inpatient hospital stay, or emergency department visit).Demographic data included age (grouped 10-year intervals (18-65+), and education level (less than high school, high school/GED, some college, associate degree, bachelor's degree, master's degree, doctorate/professional degree).The survey was administered using the Qualtrics platform, a nationally recognized consumer experience software.The online survey can be found in the Online Data Supplement (ODS). Ethical considerations The Intermountain Health Institutional Review Board approved the study and all procedures (IRB number 1051610).Informed consent was obtained for all participants prior to participating in the online, semi structured interviews.For the online survey, a cover letter of explanation was included at the start of the Qualtrics survey to inform potential participants that completing the survey implied consent.All data was securely stored using the Qualtrics secure web platform and then transferred to a secure REDCap system database [28,29].The interview participants received a $100 gift card as compensation for their time and effort.Survey participants received incentives through their contact with Qualtrics.The study was supported by Gilead contract number 2021029. Data and statistical analysis Participants whose survey responses did not include answers to demographics questions, the Brief-COPE, or SRI questions were excluded from analysis.Participants that repeatedly marked the same answer for each question in the Brief-COPE section of the survey were excluded from analysis.Descriptive statistics were calculated as median (interquartile range) or count (%) as appropriate. Cluster analysis, employing k-means clustering, was used to identify coping profiles based on patterns of responses to the strategies patients reported using to cope with COVID-19 related stress measured by the Brief-COPE, and quality of social support with person for whom they had close relationships was measured by SRI [30].This analytic approach allows for identifying subgroups defined by similarities among multiple dimensions of interest, which is an improvement over examining coping strategies discretely [31].To prepare the data for clustering analysis, the Brief-COPE and SRI responses were normalized between -1 and 1, thus weighting each question equally by the Euclidian distance calculation method used by kmeans clustering.Silhouette analysis, which identifies the number of clusters with the lowest misclassification rate, was used to determine the optimal number of clusters, while maintaining adequate group size of no less than 15% of the participants.Once the optimal number of clusters was determined, the first two components of a principal component analysis were inspected to verify adequate separation between the groups, indicating unique coping profiles were indeed identified with the k-means clustering approach.We reviewed the emotional, behavioral, and social relationships of the clusters to describe them with a label.Descriptive statistics of self-reported demographics and symptoms were presented by coping profile and PACS status. 
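A minimal sketch of this clustering workflow is shown below, assuming a respondents-by-items matrix of Brief-COPE and SRI responses. It is not the study code: the data are random placeholders, and the standard silhouette coefficient stands in for the misclassification-based silhouette criterion described above, while the 15% minimum group size and the two-component PCA check follow the text.

```python
# Minimal sketch of the clustering workflow described above (not the study code).
# X is a hypothetical respondents-by-items matrix of Brief-COPE + SRI responses.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
X = rng.integers(1, 5, size=(976, 38)).astype(float)        # placeholder survey data

# Normalize each item to [-1, 1] so every question is weighted equally.
X_norm = 2 * (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0)) - 1

best_k, best_score = None, -np.inf
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=25, random_state=0).fit_predict(X_norm)
    sizes = np.bincount(labels) / len(labels)
    if sizes.min() < 0.15:                                   # keep groups >= 15% of sample
        continue
    score = silhouette_score(X_norm, labels)
    if score > best_score:
        best_k, best_score = k, score

labels = KMeans(n_clusters=best_k, n_init=25, random_state=0).fit_predict(X_norm)
pcs = PCA(n_components=2).fit_transform(X_norm)              # inspect group separation
print(f"chosen k = {best_k}, silhouette = {best_score:.3f}")
```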
Chi squared test and Fisher's exact test in cases where cell counts were less than 5, were used to test for differences between groups.Prior to performing statistical tests by PACS status and coping profiles, categories for age, gender, race/ethnicity, education, and religion were condensed into fewer categories.Statistical significance was set to 0.05.Due to the exploratory nature of the analyses, we did not correct for multiple comparisons.All statistical analyses were conducted using R 4.0.3. Demographics We recruited a total of 17,271 individuals from the Qualtrics research panels.Of those, 1194 met study inclusion criteria.There were 218 (18%) of the respondents that did not interact with the survey in a meaningful way, resulting in 976 (82%) participants that were included in the analysis (Fig 1). The majority of participants were ages 25 to 54 years (57%), and the most common education levels were high school/GED or some college (47%).Sixty one percent were female, 67% were non-Latino white and 16% were Black/African American with 12% Hispanic/Latino.Most (87%) respondents had health insurance.Fifty percent of participants did not require an interaction with a hospital facility, 27% visited the emergency department (ED), and 23% were admitted to the hospital (general hospital or intensive care unit [ICU]) (Table 1). The ranking activity asked participants to rank which symptom best described their COVID-19 experience.The symptom most frequently reported in the position that best describes the COVID-19 experience was shortness of breath (36%) (Fig 2A). Participants with PACS Of participants with self-reported PACS, 64.4% (183/284) had a time interval between their positive COVID-19 test and when they took the survey of 90 or more days which meets the current criteria for PACS [3].Respondents with PACS were more likely to be, female, very religious, and to have required an emergency department visit or hospitalization during their COVID-19 infection (Table 3). Based on the ranking activity, the symptoms best describing the participants overall COVID-19 experience were different between participants with PACS and those without PACS (P<0.001).Participants with PACS reported shortness of breath at a higher rate and cough at a lower rate than non-PACS respondents (Fig 2B). Coping profiles and patterns of coping Silhouette analysis based on misclassification rates from discriminant analysis identified three clusters as the optimal number of groups.Outward copers (n = 284, 29%), when coping with COVID-19 illness stress these participants used more active coping strategies and reported receiving emotional and instrumental support from others.They reported higher levels of active coping (e.g., "I've been concentrating my efforts on doing something about the situation I'm in") and drawing on their social networks for support.They also used relatively high levels of positive reframing, planning, humor, acceptance, and religion as coping strategies.This group reported relatively low levels of behavioral disengagement, substance use, or self-blame.In general, their coping could be described as very active, harnessing social support from others, and low in strategies associated with poorer mental or physical health (substance use, self-blame).They rated their close relationships as relatively high in helpfulness and low in upset or unpleasantness. 
Inward copers (n = 358, 37%), when coping with COVID-19 illness stress, were less likely to receive emotional or instrumental support from others, and were also less likely to use planning, humor, or religion compared to the other coping profiles.

The dynamic copers (n = 334, 34%) reported high levels of most coping strategies, both adaptive strategies such as acceptance and active coping, and less adaptive strategies such as substance use and denial. These patients reported more upset and unpleasantness in their relationships with close others compared to the other profiles. These dynamic copers were using a broad variety of coping strategies.

Coping profiles-Demographics and symptoms Outward copers were characterized by female gender, white race, age 55 and older, education beyond high school, and being very religious. Inward copers were nearly equally split between education levels. However, inward copers also contained a higher proportion of patients reporting white race, reporting no religiosity, and having received care outside of the hospital setting during their original illness. Dynamic copers, when compared to inward and outward copers, were younger, more likely to be male, to belong to a racial or ethnic minority, to be college graduates, and more likely to have visited the emergency department or been admitted to the hospital during their COVID-19 illness (Table 5).

Symptoms also differed by coping profile. In selections of symptoms experienced (multiple selections permitted), the top five most common symptoms reported across all three coping profiles, although ranked differently by the profiles, were cough, body aches/joint pain, fatigue, fever/chills, and headache. When investigating symptom reports by profile, a higher proportion of outward copers reported symptoms in each of these 5 symptom groups (70%-88%), dynamic copers reported each of these 5 symptom groups in the lowest proportion (51%-60%), and inward copers reported each of these 5 symptom groups at intermediate levels (61%-78%). Dynamic copers reported that chest pain was a bothersome symptom at higher rates (25%) compared to outward or inward copers (P<0.001). A higher proportion of outward copers reported fatigue (39%) and shortness of breath (27%) as bothersome compared to dynamic and inward copers. Dynamic copers reported PACS at a higher rate than either inward or outward copers (36%; p = 0.005) (Table 6).

All coping profiles ranked shortness of breath as the symptom that best defined their COVID-19 experience. Dynamic copers indicated that chest pain best characterized the COVID-19 experience at a higher rate than the other two profiles. However, inward and outward copers chose cough and fatigue at higher rates when compared to dynamic copers when asked about symptoms that described their COVID-19 experience (Fig 4).
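The profile-by-PACS comparison reported above (36% of dynamic copers reporting PACS; p = 0.005) is a chi-squared test on a 3 x 2 contingency table. The sketch below shows the form of that test; the cell counts are illustrative placeholders consistent with the group sizes, not the study's actual table.

```python
# Sketch of the chi-squared test used for the profile-by-PACS comparison.
# The counts below are illustrative placeholders, not the study's table.
import numpy as np
from scipy.stats import chi2_contingency

#                 PACS   no PACS
table = np.array([[120,   214],    # dynamic copers (n = 334)
                  [ 85,   199],    # outward copers (n = 284)
                  [ 79,   279]])   # inward copers  (n = 358)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```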
Discussion This study contributes to the growing body of literature describing both acute and chronic symptoms of COVID-19 in an ethnically and racially diverse group of Americans.We uniquely centered the experience of COVID-19 around participant prioritized symptoms and explored how coping profiles related to the experience of COVID-19 disease and PACS.The most common symptoms reported by our diverse cohort were cough, fatigue, and body aches/ joint pains.PACS was reported by 29% of our cohort and those who reported PACS were more likely to be bothered by chest pain and brain fog than participants without PACS.Importantly, participants with PACS were also more likely to report being hospitalized (including ICU) or visiting the emergency department than those who did not report PACS.The most common symptoms overall for the PACS group were like those reported by the entire cohort; however, symptoms of gastrointestinal issues, trouble sleeping, and cognitive symptoms were all more likely to be reported by individuals with PACS than those without PACS.Interestingly, patients with PACS ranked fatigue as a bothersome symptom less frequently than those who reported acute COVID-19 recovery (i.e., those who did not report PACS).This is in contrast to other reports of non-hospitalized patients with PACS who reported anosmia, fatigue, and shortness of breath as persistent symptoms [32].A recent Israeli study investigated outcomes of patients with mild COVID-19 and found anosmia, dysgeusia, memory impairment, dyspnea, and weakness were reported as long lasting symptoms [33].In our study, patients with PACS were more likely to be admitted to the hospital.This suggests that severity of a person's COVID-19 experience may shape the symptoms rated as most bothersome.Participants with PACS reported that shortness of breath was the most defining symptom of their COVID-19 experience at a higher rate when compared to the acute COVID-19 participants who rated several symptoms more equally as defining their overall COVID-19 experience.Women were disproportionately represented in the PACS group, as were respondents who reported being very religious.The relationship between the most bothersome COVID-19 symptoms and individual characteristics is unclear and remains an area for additional investigation.We also found three distinct coping profiles (inward, outward, and dynamic) based on the Brief-COPE and the SRI and explored how coping profiles related to the COVID-19 experience.Symptom patterns were relatively similar across coping profiles with all profiles reporting the same top 5 symptom groups within the coping profile.Chest pain was disproportionately to be reported most bothersome by dynamic copers compared to the other two profiles.Overall, however, our findings appear to suggest that symptom patterns are more different by PACs vs. acute COVID-19 patients rather than by coping mechanism.It is important to note that in our cohort PACS is associated with more severe initial disease (based on hospitalization status).Alternatively, most recent reviews suggest that it is not clear whether acute disease severity is more or less likely to relate to PACS [34,35]. 
Coping strategies include action-focused (taking action to change a situation, planning what to do next), emotion-focused (positively reframing the situation, using humor), and strategies sometimes described as harmful (blaming oneself, mental disengagement) [36,37].The outward copers had the arguably healthiest coping strategies.Hospitalization and fatigue are both associated with poor psychological outcomes, and poor psychological outcomes are often associated with more inward coping profiles [38].Despite a disproportionate likelihood of fatigue compared to other profiles and of hospitalizations compared to inward copers, the outward copers marshaled healthy coping mechanisms indicative of positive action and planning. The dynamic coping profile emerged as a new cluster of coping strategies not clearly described previously [27].This group is interesting as they are more likely to be male, younger, and to be Black and Hispanic/Latino than the other two profiles.They activate many coping strategies that are both positive and negative.The dynamic coping group reported higher frequency of PACS than the patients in the other two coping profiles.Potentially, dynamic copers may be testing out different coping strategies to try and find one that works best to manage this new change to their lives.The COVID-19 illness may be one of the first, if not the only major health challenges they have faced and therefore this group may not have previously required or discovered clear coping mechanisms.In addition, cultural changes and the way general media, social media, and technology have evolved over the past few years may have changed the ways in which younger patients cope with illness [39].Younger people coping with the pandemic may also exhibit more distress relating to the use of less adaptive coping strategies [40].Furthermore, recent data about examining coping profiles longitudinally during the COVID-19 pandemic suggest coping strategy pattern changes may be relatively common across time [41]. In our prior work we report associations between coping profiles and decision making related to chronic illness [27].In this study we extended this work to understand coping profiles in relation to COVID-19 experiences and reports of PACS.Coping mechanisms and individual characteristics can influence experiences with a disease and quality of life [42][43][44].Dynamic copers, who were comparatively high on using maladaptive coping strategies (like substance use) were the coping group most likely to report having PACS.Outward copers had relatively high levels of hospitalization (compared to inward copers) but were using few maladaptive and more adaptive coping strategies.This is consistent with models of resilience and post-traumatic growth [45].Additionally, outward copers were likely to be older and may have learned more adaptive strategies to marshal over time.Recent data examining coping specifically in patients who have COVID-19 are demonstrating that coping strategies for those with PACS may promote improved health agency [46].In one study of patients hospitalized with COVID-19, indicating a relatively severe acute illness, participants stated that COVID-19 was often a health turning point, and that a sense of profound gratitude, a change in outlook, and engagement in healing behaviors (e.g., exercise) accompanied recovery from COVID-19 [47].This is supported in our data by outward and dynamic copers use of adaptive coping strategies. 
The association between coping profile and the self-reported experience of PACS is an area for additional investigation.The link between coping profiles and patient experience may help clinicians develop integrative treatments that are more patient rather than population centered.Much like genetic patterning and the body's response to SARS-CoV-2, it is possible coping profiles contribute to a body's physiologic response to disease, treatment, and experience. Limitations to this study include those that are common to survey investigations.Our primary data collection and survey development occurred in the early stages of the COVID-19 pandemic and were focused on symptoms.Other important diseases, such as pulmonary embolism, were not addressed in this study.The results rely on the participant's lived experience which may be affected by participation bias and social desirability; however, our large cohort is a representative US sample with a high response rate which minimizes bias.Second, the presence of PACS was not confirmed by qualified medical diagnostics and 36% of patients in our study did not meet the current diagnostic criteria for PACS.However, given the recent emergence of a formal definition of PACS and the imminent focus on patient-centered outcomes, the fact that some patients self-identified as having the syndrome is telling.This work is largely exploratory and may suffer from statistical limitations regarding multiple comparisons.Comparisons in the exploratory analysis are not definitive but do provide fertile ground for further investigations into the experience of PACS and individual traits like coping profiles. Conclusion Cough, fatigue, and body aches/joint pain are consistently among the most important symptoms reported by patients with acute COVID-19 or PACS.Participants with PACS compared to those with acute COVID-19 reported GI symptoms and cognitive symptoms at disproportionately higher rates.Participants with PACS also ranked fatigue as the symptom that defined their disease experience less frequently and they reported shortness of breath as the most defining symptom of their COVID-19 experience.Three distinct coping profiles were used by participants in our cohort, outward coping, inward coping, and dynamic coping.Outward copers reported fatigue and cough at the highest rates yet were marshaling generally healthy coping strategies.Dynamic copers were unique as they reported many positive and negative coping mechanisms, were more likely to be young, male, Black, Hispanic/Latino, and more likely to self-report PACS.Further investigations into coping profiles in general, especially the dynamic coping profile, and the experience of COVID-19 related long term illness is warranted.
2023-11-22T05:07:36.257Z
2023-11-20T00:00:00.000
{ "year": 2023, "sha1": "de78c339680425b51c23bfb93df7d8c583fb345d", "oa_license": "CC0", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "de78c339680425b51c23bfb93df7d8c583fb345d", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253451032
pes2o/s2orc
v3-fos-license
Comparisons of healthy human brain temperature predicted from biophysical modeling and measured with whole brain MR thermometry Brain temperature is an understudied parameter relevant to brain injury and ischemia. To advance our understanding of thermal dynamics in the human brain, combined with the challenges of routine experimental measurements, a biophysical modeling framework was developed to facilitate individualized brain temperature predictions. Model-predicted brain temperatures using our fully conserved model were compared with whole brain chemical shift thermometry acquired in 30 healthy human subjects (15 male and 15 female, age range 18–36 years old). Magnetic resonance (MR) thermometry, as well as structural imaging, angiography, and venography, were acquired prospectively on a Siemens Prisma whole body 3 T MR scanner. Bland–Altman plots demonstrate agreement between model-predicted and MR-measured brain temperatures at the voxel-level. Regional variations were similar between predicted and measured temperatures (< 0.55 °C for all 10 cortical and 12 subcortical regions of interest), and subcortical white matter temperatures were higher than cortical regions. We anticipate the advancement of brain temperature as a marker of health and injury will be facilitated by a well-validated computational model which can enable predictions when experiments are not feasible. it is largely simplified and based on uniform arterial perfusion, without consideration of advective heat flow within blood vessels or blood circulation through veins 35 . Chen and Holmes improved upon Pennes' approach by adding directional blood flow to model advective heat transfer and convective heat exchange between tissues and blood vessels, and identified thermally significant blood vessels (i.e., vessels not in thermal equilibrium with surrounding tissue) 36 . Vasculature modeling was further improved by incorporating vessel curvature and branching in the discrete vascular algorithm (DIVA) model [37][38][39] . More recently, Shrivastiva and Roemer developed a model for perfused tissue based on principles of mass and energy conservation 40 , which was refined to reduce the high computational load 41 and validated in a porcine model 42 . Our group recently demonstrated an improved model, building upon prior approaches [43][44][45] , which ensures local mass and energy conservation in formulation of the governing equations and is capable of personalized brain temperature predictions using individual input data from each subject. Given the importance of brain temperature in assessment of both health and disease, and a lack of methods to routinely evaluate whole brain temperature in the clinical setting, a biophysical model capable of subject-specific predictions was developed in our previous pilot study. Comparison of model-predicted and MR-measured temperatures in a small healthy cohort was performed; however, complete statistical analysis and generalization were limited by the small sample size (N = 3) 46 . The goal of the present study is to evaluate the predictive power of our biophysical model 46 applied in a larger cohort by performing regional analysis to compare thermal gradients and temperature patterns with those measured using whole brain chemical shift thermometry. 
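For reference, the Pennes formulation discussed above is usually written as the classical bioheat equation, with ρ and c the tissue density and specific heat, k the thermal conductivity, ρ_b, c_b and w_b the blood density, specific heat and perfusion rate, T_a the arterial inlet temperature, and q_m the metabolic heat generation. Note that this is the textbook uniform-perfusion form, not the fully conserved formulation developed in the present model:

\rho c \,\frac{\partial T}{\partial t} = \nabla \cdot \left( k \, \nabla T \right) + \rho_b c_b w_b \left( T_a - T \right) + q_m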
Chemical shift thermometry was used for experimental temperature measurements as it is the only MR method capable of providing absolute and voxel-wise temperature values necessary for direct comparison with temperatures predicted using our model 18,20 . We anticipate the advancement of brain temperature as a marker of health and injury, particularly after brain injury or ischemia, will be facilitated by both experimental measurements and a well-validated computational model, which can enable predictions when experiments are not feasible. Study participants and MR acquisition. This prospective study was approved by the local Institutional Review Board and all subjects provided written informed consent. All procedures were performed in accordance with the relevant guidelines and regulations. Inclusion criteria were medically healthy individuals of both sexes, any race, and any ethnicity, between 18 and 45 years old to avoid white matter changes due to age. Exclusion criteria were a history of neurodegenerative disease, epilepsy, ischemia, central nervous system surgery, moderateto-severe traumatic brain injury, or contradictions to MR imaging. MR data was collected from 30 healthy volunteers, 15 males and 15 females (mean ± standard deviation [SD] age: 26 ± 4 years old; range 18-36 years old). Five participants self-reported their race as African-American or Black, 11 as Asian, and 14 as White or Caucasian. Two participants who identified as White also self-reported their ethnicity as Hispanic; all other participants self-reported their ethnicity as non-Hispanic. MR data was acquired on a 3 T MR scanner (PrismaFit, Siemens, Erlangen, Germany) using a 32-channel phased array head coil. A T1-weighted magnetization-prepared rapid gradient-echo (MPRAGE) sequence was used to acquire high resolution structural images (repetition time 2 , matrix size = 256 × 256, slice thickness = 0.62 mm, GRAPPA acceleration factor = 2, acquisition time = 3 min 32 s). MRA acquisition covered the major arteries including the circle of Willis (slab thickness = 80 mm) and a saturation band was applied over the acquisition block to limit venous contamination. MR venography (MRV) was collected using a 2D TOF sequence (TR/TE/FA = 18 ms/3.79 ms/60°, FOV = 220 × 220 mm 2 , matrix size = 256 × 256, slice thickness = 3.0 mm, GRAPPA acceleration factor = 2, acquisition time = 2 min 44 s). Echo-planar spectroscopic imaging (EPSI) with manual B 0 shimming was acquired and used for MR thermometry as previously described 46,47 (TR1/TR2/TE = 1551/511/17.6 ms, FA = 71°, FOV = 280 × 280 × 180 mm 3 , k-space sampling = 500 points with 1250 Hz spectral bandwidth, 50 × 50 voxels, 18 slices, nominal resolution = 5.6 × 5.6 × 10 mm 3 , interpolated resolution = 4.4 × 4.4 × 5.6 mm 3 (64 × 64 × 32 data points), acquisition time = 15 min 17 s). The same center slice position and orientation were used for both the EPSI and T1-weighted image acquisition to facilitate image registration. A saturation band was placed over the sinuses and cavity regions to avoid contamination of neighboring voxels (Fig. S1). Lipid inversion nulling (TI = 198 ms) was performed, and interleaved non-water suppressed and water-suppressed (using the chemical shift selective suppression sequence (suppression bandwidth = 35 Hz)) scans were acquired. 
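The water and NAA resonances acquired with this EPSI protocol are the basis of the thermometry described below: temperature is estimated from the water-NAA chemical-shift difference. The sketch uses approximate literature constants (water shifting roughly -0.01 ppm/°C against the temperature-insensitive NAA reference, with a separation near 2.665 ppm at 37 °C); the study's MIDAS pipeline applies its own calibration, including the gray/white matter correction, so the numbers here are assumptions for illustration only.

```python
# Hedged sketch of water-NAA chemical-shift thermometry. The calibration
# constants below are approximate literature values (assumptions), not the
# calibration used by the study's MIDAS processing.
import numpy as np

REF_SHIFT_PPM = 2.665    # approximate water-NAA separation at 37 degrees C (assumed)
SLOPE_C_PER_PPM = 100.0  # ~ 1 / 0.01 ppm per degree C (assumed)

def shift_to_temperature(delta_ppm):
    """Convert a water-NAA chemical-shift difference (ppm) to temperature (degrees C)."""
    delta_ppm = np.asarray(delta_ppm, dtype=float)
    return 37.0 + SLOPE_C_PER_PPM * (REF_SHIFT_PPM - delta_ppm)

# Example: voxel-wise shift differences from fitted spectra (hypothetical values)
print(shift_to_temperature([2.670, 2.665, 2.658]))   # ~36.5, 37.0, 37.7 degrees C
```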
Axillary temperature was recorded at three time points during the EPSI sequence (at the start of the scan, at 8 min, at the end of the scan) using a fiber optic temperature sensor (OTG-MPK5, Opsens) placed underneath the left arm. The average axillary temperature for each subject was used to estimate the effect of using subject-specific inlet arterial temperatures in the model. Inlet arterial temperatures for each subject were approximated as the pulmonary artery temperature, calculated using the previously published relationship between axillary (ax) and pulmonary artery (PA) PA temperatures (T PA = T ax + 0.47 °C) 48 . Whole brain MR thermometry. MR-measured brain temperature maps were generated using the metabolite imaging and data analysis system (MIDAS) 47,49 . Pre-processing included eddy current correction, zeroorder phasing, and correction for B 0 field inhomogeneity, followed by Fourier transform, spectral denoising using principal component analysis, and spectral fitting using FITT 2.1 in MIDAS 50 . Temperature maps were calculated using the chemical shift difference between water and N-acetylaspartate (NAA), correcting for frequency differences between gray matter (GM) and white matter (WM) 46 Biophysical modeling of brain temperature. Whole brain simulated temperature maps were generated using our biophysical model with MR input data for each individual subject as previously described in detail 46 . Briefly, intra-domain (within blood vessels and within tissue voxels) and inter-domain (between vessels and voxels) cerebral blood flow (CBF) rates are calculated using bioheat transfer equations. The values for heat generation (e.g., metabolic rates) and thermophysical properties (e.g., tissue density, thermal conductivity, specific heat of tissue and blood) were the same as previously reported 45 . These properties were aggregated from early bioheat transfer models [57][58][59] and derived from in vivo or ex vivo experiments. Voxel-wise physical properties were then calculated by applying these a priori data to MR-derived tissue probability maps (TPMs) generated from segmented T1-weighted images 45,46 . Brain temperature is calculated by solving the discretized form of the governing equations as described in our previous pilot study 46 . To combine tissue and vascular information in the same model, MRA and MRV data were also transformed to T1-weighted image space in SPM 12 (see "Whole brain MR thermometry" section) 55,60 , followed by a power-law transformation to enhance image contrast. Automatic vessel segmentation was facilitated with Rivulet 61,62 . Due to the limited spatial resolution of MRA and MRV, additional fine blood vessel segments were augmented to the originally segmented vessel tree using a rapidly exploring random tree (RRT) algorithm as previously reported 46,63 . It was assumed that more dense vasculature will exist where higher CBF is observed, and branched nodes were generated based on the CBF map estimated from TPMs and a priori CBF values in gray and white matter. Input parameters were selected as follows: inlet arterial temperature was fixed to 36.8 °C (the median between carotid artery temperature [36.6 °C] 64 and brain tissue temperature [37.0 °C] 65 in humans) or approximated for each subject using pulmonary artery temperature (see "Study participants and MR acquisition" section); and terminal capillary diameter was set to 7 μm based on previous studies 66,67 . A schematic of the input data generation and modeling procedure is shown in Fig. 1. 
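Two of the scalar conversions described above lend themselves to a short sketch: the mapping from the water–NAA chemical shift difference to temperature, and the axillary-to-pulmonary-artery adjustment used for subject-specific inlet temperatures. The linear calibration coefficients below (reference shift and slope) are illustrative placeholders only; the actual calibration and the GM/WM frequency correction are implemented in MIDAS, and only the +0.47 °C offset is taken from the text.

```python
def temperature_from_shift(delta_ppm, ref_delta_ppm=2.665, ref_temp_c=37.0,
                           slope_c_per_ppm=-100.0):
    """Map a water-NAA chemical shift difference (ppm) to temperature (deg C).

    The water resonance moves by roughly -0.01 ppm per degree C while NAA is
    nearly temperature-independent; the reference shift and slope here are
    placeholder values, not the MIDAS calibration used in the study.
    """
    return ref_temp_c + slope_c_per_ppm * (delta_ppm - ref_delta_ppm)


def pulmonary_artery_temp(axillary_temp_c):
    """Subject-specific inlet arterial temperature, T_PA = T_ax + 0.47 deg C."""
    return axillary_temp_c + 0.47


print(round(temperature_from_shift(2.660), 2))  # 37.5 with the placeholder calibration
print(round(pulmonary_artery_temp(36.40), 2))   # 36.87
```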
All input data (T1-weighted images, MRA, and MRV) were used to determine model-predicted temperature independently of the MRS (EPSI) data used for MR-measured temperature calculations. Regional analysis. Regional analysis was conducted using both subject-specific and group-averaged temperature maps. For regional comparisons at the subject level (in resampled T1-weighted image space, 1 mm isotropic voxels), cortical and subcortical regions were parcellated in FreeSurfer 6.0 (http://surfer.nmr.mgh.harvard.edu/) using the Desikan-Killiany atlas 68 and Gaussian classifier atlas 69. Ten cortical regions (left and right frontal lobes, temporal lobes, parietal lobes, occipital lobes, and insula) and twelve subcortical regions (left and right cerebral WM, cerebellar WM, cerebellar cortex, thalamus, putamen, and pallidum) were used for analysis. Atlas regions with any dimension smaller than the original EPSI voxel size (4.4 × 4.4 × 5.6 mm 3) were excluded (e.g., cingulate, hippocampus, etc.) to reduce partial volume effects. To facilitate group-averaged comparisons, MR-measured temperature maps for all subjects were registered to Montreal Neurological Institute (MNI)152 space (2 mm isotropic voxels) in MIDAS 70,71. Model-predicted temperature maps generated in T1-weighted image space were transformed to MNI152 space in SPM 12 55,60. MNI-registered temperature maps (both MR-measured and model-predicted) were smoothed using a 3D Gaussian filter (kernel size = 5 × 5 × 5 voxels, sigma = 1) in the spatial domain, averaged for all subjects, and segmented into 8 lobar regions (right and left frontal, parietal, temporal, and occipital lobes (Fig. S2)) using the built-in segmentation tools in MIDAS 72,73.
Figure 1. Overview of our subject-specific biophysical model used to generate personalized whole brain temperature maps. Input MR data (T1W, MRA, MRV) is acquired from each subject and used in the model. Model-predicted temperatures are compared to MR-measured temperatures acquired independently with MR thermometry using EPSI. T1W, T1-weighted MR images; MRA, MR angiography; MRV, MR venography; EPSI, echo planar spectroscopic imaging; RRT, rapidly exploring random tree; T model, model-predicted brain temperature; T MR, MR-measured brain temperature.
Comparison of model-predicted and MR-measured brain temperatures. As in our previous work 46, we used a threshold of 0.8 °C based on the uncertainty of absolute temperature measurements in a phantom study performed using the same scanner 28. For voxels satisfying all four criteria for spectral quality control (see "Whole brain MR thermometry" section), the percentage of within-threshold voxels was calculated for each subject. As the range of temperatures in model predictions was narrower than that of MR-measured temperatures, Z-scores were calculated as Z i,j = [T i,j − mean(T i,j)]/SD(T i,j), with T i,j being one of 660 data points in region i of subject j for either MR-measured or model-predicted temperatures, and used for comparison. Bland-Altman plots were constructed for all cortical and subcortical regions and used to visually compare model-predicted and MR-measured temperatures. Values are reported throughout as the mean ± SD unless otherwise noted. IRB statement. This study was approved by the Emory Institutional Review Board, and all subjects provided written informed consent prior to participation. Results. Voxel-wise agreement between model-predicted and MR-measured temperatures.
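For reference, the agreement statistics defined above (Z-score normalization within each set of regional temperatures, followed by Bland–Altman bias and limits of agreement) can be sketched as follows; the temperature values are hypothetical stand-ins for the 660 regional data points used in the actual comparison.

```python
import numpy as np

def zscore(values):
    """Standardize a set of regional temperatures to zero mean and unit SD."""
    values = np.asarray(values, dtype=float)
    return (values - values.mean()) / values.std(ddof=1)

def bland_altman(predicted, measured):
    """Bias and 95% limits of agreement between paired predicted/measured values."""
    diff = np.asarray(predicted, float) - np.asarray(measured, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical regional temperatures (deg C) for one subject.
t_model = [37.0, 37.2, 37.1, 37.2, 37.1]
t_mr    = [37.1, 37.3, 36.9, 37.4, 37.0]
print(bland_altman(zscore(t_model), zscore(t_mr)))
```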
Regional agreement between model-predicted and MR-measured temperatures. As voxel-wise agreement was high, regional analysis was performed to facilitate further comparison of thermal gradients. For all regions, mean absolute differences were within the agreement threshold of 0.8 °C (Fig. 2, Table S1). Mean differences in model-predicted and MR-measured temperatures among cortical regions were all within the agreement threshold and ranged from 0.13 °C in the left frontal lobe to 0.27 °C in the left insula. In subcortical regions, differences ranged from 0.11 °C in left cerebral WM to 0.54 °C in left cerebellar WM. Unlike cortical regions, several subcortical regions (left/right cerebellar WM, cerebellar cortex, putamen, and pallidum) had maximum absolute temperature differences exceeding the agreement threshold of 0.8 °C. Left hemispheric temperatures were 0.03 °C and < 0.01 °C higher than the right hemisphere from model-predictions and MR measurements, respectively. Subcortical regions were 0.09 °C and 0.07 °C higher than cortical regions for model-predicted and MR-measured temperatures, respectively. Regional averages of MR-measured temperatures spanned a larger range (35.85-39.18 °C) compared to model-predictions (36.95-37.28 °C) for all subjects. Across all regions and subjects, the majority (94.4%) of regional temperature-derived Z-scores between model-predictions and MR measurements were within the limits of agreement in Bland-Altman analysis (Fig. 3). While minimal bias was observed across subjects (Fig. 3), bias was observed in some brain regions (Fig. S4). Model-predicted and MR-measured temperatures were largely similar in cerebral WM. MR-measured temperatures were higher than model-predictions in cerebellar regions, and the opposite trend was observed in putamen and pallidum regions. Global brain temperature patterns. To characterize global patterns in healthy brain temperature, whole brain model-predicted and MR-measured temperature maps in standardized MNI-space were investigated. While voxel-wise and regional comparisons showed good agreement at the subject level, global analysis facilitates characterization of expected biophysical trends, e.g., as a function of tissue type and brain region. Qualitatively, subcortical WM regions tend to have relatively higher temperatures compared to other regions in both MR-measured and model-predicted temperature maps (Fig. 4). Quantitatively, the lowest and the highest MR-measured temperatures were observed in the right frontal lobe (36.87 °C) and left parietal lobe (37.12 °C), respectively (Fig. 5A,B). From model-predictions, the lowest and the highest temperatures were observed in the left temporal lobe (37.06 °C) and the right parietal lobe (37.10 °C), respectively (Fig. 5C,D). The maximum absolute difference between MR-measured and model-predicted temperatures was observed in the right frontal lobe with a value of 0.22 °C, likely due to suppression of signal in portions of the frontal lobe (see "Methods" section). The minimum absolute difference was observed in the left temporal lobe (0.01 °C), indicating a region with high consistency between model-predictions and MR measurements. The average venous temperature in the internal jugular vein from model-predictions was 37.0 °C across all subjects, 0.2 °C higher than the input arterial temperature (36.8 °C) used in the model.
Mean arterial temperatures calculated from experimentally-measured axillary temperatures for all subjects were 36.80 ± 0.55 °C, similar to the fixed input arterial temperature (36.8 °C) used in the model. The use of a subject-specific inlet arterial temperature (estimated using pulmonary arterial temperature; see "Study participants and MR acquisition" section) in the model resulted in a maximum bias for whole brain temperature of ~ 0.5 °C and did not alter spatial thermal gradients. Discussion. High overall agreement was observed between model-predictions and MR measurements; however, variations in temperature ranges and spatial temperature patterns were observed. Regional analysis revealed similar model-predicted and MR-measured temperatures across most brain regions, with the lowest difference (0.11 ± 0.07 °C) observed in left cerebral white matter. While mean differences between model-predictions and MR measurements were within the agreement threshold (0.8 °C) for all regions, the largest differences in subject-specific regional analyses were observed in left and right cerebellar WM (Table S1), likely due to portions of the suppression band covering much of the cerebellum. Maudsley et al. previously reported reproducibility errors in regional (lobar-scale) MR-measured temperatures of 0.2 °C 51. From our group-averaged, lobar-scale comparisons, the maximum absolute difference between model-predictions and MR measurements was 0.22 °C (right frontal lobe). This suggests that, at the lobar scale, agreement between MR measurements and model-predictions is on the same scale as reproducibility errors in whole brain MR thermometry. Within each method (i.e., MR measurements and model-predictions), regional differences were observed. The highest temperatures were observed in parietal lobes for both MR measurements and model-predictions. For MR measurements, higher temperatures were observed in the left parietal lobe (Fig. 5A,B); further exploration of regional differences is warranted. The lowest measured temperatures in the right frontal lobe (Fig. 5A,B) are attributed, at least in part, to the limited number of voxels in the frontal lobe as many were excluded due to the saturation band. For model-predictions, the highest temperatures were observed in the right parietal lobe (Fig. 5C,D) as group-averaged GM/WM ratios were lowest in parietal lobes, resulting in the lowest CBF 74,75 and less heat removal by blood circulation. Similarly, the lowest model-predicted temperatures observed in the left temporal lobe (Fig. 5C,D) can also be explained by the highest GM/WM ratio in this region. Higher temperatures in subcortical regions were observed compared to cortical regions for both model-predictions and MR measurements, consistent with prior reports 13,46,51,76. Cortical regions are related to high-level function such as decision-making or sensing the surrounding environment and have denser arterial and venous structure, resulting in higher CBF with more cooling. Increased conduction near the skull due to lower ambient temperature outside the head also results in lower temperatures in superficial cerebral sites and relatively higher deep brain temperatures 77. MR-measured temperatures were also higher in the left hemisphere compared to the right hemisphere by 0.03 °C, consistent with prior reports observing 0.03 °C higher temperatures in the left hemisphere 51.
For model-predictions, the average internal jugular vein temperature was 37.0 °C across all subjects, 0.2 °C higher than the input arterial temperature (36.8 °C) in our model. Previous literature reported a ~ 0.3 °C temperature difference between arterial and venous temperatures, similar to our findings 5,78. As heat dissipation in the brain is largely due to conduction and heat transfer from tissue to cooler incoming blood, this gradient in vessel temperatures is expected, particularly in the healthy brain. Temperature differences between model-predictions and MR measurements were primarily attributed to challenges in MR thermometry acquisition in some regions (e.g., susceptibility near the sinus cavity, suppression band placement, etc.) and the narrow range of model-predicted temperatures compared to MR measurements. Susceptibility artifacts can quickly deteriorate spectral quality and reduce the accuracy of chemical shift MR thermometry 51,79,80. MR-measured temperatures at the brain periphery in some regions are relatively higher than the corresponding model-predicted temperatures, attributed in part to susceptibility artifacts in experimental thermometry at the tissue boundaries. While manual shimming and adjustment of suppression bands were both applied in this study to minimize susceptibility, improvements in MR spectroscopy acquisition may alleviate some of these challenges. The suppression band resulted in partial signal loss in the frontal lobe as described in previous work 47,51, but reduced further spectral contamination from susceptibility artifacts or field inhomogeneity (particularly for echo planar acquisition) from the sinuses or other cavities. We acknowledge the voxel size used in EPSI acquisition may lead to some partial volume effects in smaller regions; however, atlas-defined regions with dimensions smaller than the voxel size were excluded to minimize this as much as possible. While reliability of the 3D EPSI sequence used for whole brain MR thermometry was not specifically investigated in the current work, many prior studies have evaluated its reproducibility both for metabolite quantification 81,82 and chemical shift thermometry 51,83,84. Maudsley et al. used Monte-Carlo simulations to show that the SD of the frequency measurement with EPSI is 0.0018 ppm (corresponding to ~ 0.2 °C), with a maximum error of 0.006 ppm (corresponding to ~ 0.6 °C) 51. Zhang et al. assessed the SD of regional temperature in healthy volunteers as a metric of repeated measurement accuracy, reporting a mean (range) of 0.4 (0.3-0.8) °C across the entire brain. The intrasubject coefficient of variation (CV) was 0.9% for brain temperature measurement using creatine as a temperature reference with the 3D EPSI sequence 83. These data support the use of EPSI as an experimental thermometry method suitable for comparison with biophysical model predictions. Finally, subject-specific brain tissue and vessel structure was used in our model; however, the same inlet arterial temperature was used for all subjects, which may be partly responsible for the relatively small variation (SD = 0.01 °C) across subjects compared to MR measurements (SD = 0.09 °C). Future work will address several limitations.
While body temperature (e.g., axillary) is not an ideal surrogate for arterial temperature, the use of body temperature for each subject in our model as a boundary condition may be necessary to account for individual variations, particularly as brain and body temperature are correlated in healthy mammals [13][14][15][16] . MRA and MRV, while relatively fast, are not optimal methods for constructing vessel structure and calculating CBF. Refinement of vessel distribution with more accurate measurements such as computed tomography angiography or acquisition at higher field strengths (e.g., 7 T) may be necessary. Similarly, direct measurements of blood flow and perfusion using, e.g., arterial spin labeling, may further improve our model. The biophysical model could also be further improved by incorporation of momentum conservation for blood flow in all simulation domains, as well as through better assessment of the input parameters and boundary conditions that are used for predicting the brain thermal behavior of a given individual. In the case of rigorous momentum conservation in the model, some thermophysical parameters which determine the thermal resistance of blood vessels and tissue may impact thermal distribution. Robust sensitivity analysis to input parameters is an immediate next step. While experimental thermometry acquired with the whole brain EPSI sequence were consistent with previous reports 51 , artifacts near the sinuses or other cavities, field inhomogeneity from the echo planar acquisition, and long scan times are limitations of this method. The experimental temperature maps were resampled and transformed to T1-weighted image space to facilitate comparisons between MR measurements and model-predictions, and some uncertainty may be introduced at the voxel-level. Finally, as the model was evaluated with data from healthy volunteers, future studies will investigate the agreement between model-predictions and MR measurements using patient data (e.g., chronic cerebrovascular disease, cardiac arrest, severe traumatic brain injury, etc.) and determine the required accuracy and resolution for the use of brain temperature as a non-invasive marker for diagnosis or treatment planning. Conclusions Personalized model-predicted brain temperature maps for 30 healthy subjects were compared with whole brain MR thermometry, and agreement in both voxel-wise and regional temperature distributions were observed. While differences exist, we expect the combined and complementary use of model-predictions and MR measurements is an emerging paradigm for further development of brain temperature as a promising biomarker for prognostication and treatment monitoring. Data availability Raw data will be made available upon request to the corresponding author and after a resource sharing agreement is in place, as required by the authors' institutions.
Effects of parity, frustration, and stochastic fluctuations on integrated information of consciousness for networks with two small-sized loops Background: Integrated Information Theory (IIT) has been attracting attention as a theory of consciousness. The latest version, IIT3.0, is still at the stage of accumulating knowledge concerning fundamental networks. This paper presents an evaluation of the system-level integrated conceptual information of a major complex, Φ Max , associated with the center of consciousness for a small-scale network containing two small loops in accordance with the IIT3.0 framework. We focus on the following parameters characterizing the system model: 1) number of nodes in the loop, 2) frustration of the loop, and 3) temperature controlling the stochastic fluctuation of the state transition. Specifically, assuming that the two loops are coupled systems, such as cerebral hemispheres, the effect of these parameters on the values of Φ Max and conditions for major complexes formed by a single loop, rather than the entire network, is investigated. Results: Our first finding is that parity of the number of nodes forming a loop has a strong effect on the integrated conceptual information Φ Max . For loops with an even number of nodes, the number of concepts tends to decrease, and Φ Max becomes smaller. When the loop is formed with an odd number of nodes, the system without frustration and the system with two frustrated loops can have exactly the same Φ Max . It is also shown that, although counterintuitive, the value of Φ Max can be maximized in the presence of stochastic fluctuations. Our second finding is that a major complex is more likely to be formed by a small number of nodes under small stochastic fluctuations. In particular, this tendency is enhanced for larger numbers of nodes constituting a loop. On the other hand, the entire network can easily become a major complex under larger stochastic fluctuations, and this tendency can be reinforced by frustration. Conclusions: Our results indicating that the entire network dominates and maintains a high level of consciousness in the presence of a certain degree of fluctuation and frustration may qualitatively correspond to actual neural behaviors. The results of this study are expected to contribute to the verification of the consistency of IIT with the actual nervous system in the future. Background Integrated Information Theory (IIT), proposed by Giulio Tononi, has been attracting attention as a theory that can mathematically describe the quality and quantity of consciousness generated in causal systems such as neural networks. Essentially, in IIT, the amount of information obtained by integrating the components of a system into one, rather than dividing them into several parts, is regarded as the quantity of subjective consciousness emerging in the system. Since IIT was first proposed [1], the computational framework has been continually improved [2,3], and the latest version is IIT3.0 [4,5]. IIT has not been constructed in a bottom-up manner to be consistent with neurophysiological experiments but has been proposed through careful observation of phenomena related to subjective consciousness to represent necessary properties of consciousness in the framework of information theory. A brief review of IIT3.0 [4] is provided in the Appendix to facilitate the understanding of the present work. 
In IIT research, the integrated conceptual information, Φ, and that of complexes represented as Φ Max , in system-level integration are frequently referred to simply as integrated information. However, to avoid confusion, in this paper, the term "integrated information" is used only for mechanism-level integrated information ϕ or ϕ Max , and the term "minimum information partition (MIP)" is also used for mechanism-level integration while the expression "integrated conceptual information" is always used as a reference to Φ or Φ Max . Furthermore, we do not deal with the formation of minor complexes, and the notation Φ Max is utilized only to express integrated conceptual information of a major complex. Following the release of newer versions of IIT, improvements were reported from various points of view. The metric of integrated information used in IIT2.0 [6] can have a negative value, which is a disadvantage. Thus, several improved metrics have been proposed [7,8,9]. Oizumi et al. [10] also interpreted IIT from the perspective of information geometry and proposed another metric of integrated information. Mediano et al. [11] applied these metrics to some common networks and compared their usefulness in a unified manner. Another major problem in the calculation of integrated information is the extremely large computational power required. This is an obstacle in verifying the validity of IIT in large-scale systems such as the human brain. For example, the number of node partitioning patterns increases exponentially with system size. Some approximate computational methods have been proposed to quickly find the optimal partitioning pattern [12,13], which can be applied to the framework of IIT2.0, but not IIT3.0, owing to its complicated hierarchical structure. In another direction including IIT3.0, an attempt to evaluate the quantity of integrated information in the limit of an infinite number of nodes was reported using a technique called mean field approximation in statistical physics [14,15]. Krohn and Ostwald [16] expressed the integrated conceptual information Φ of the IIT3.0 framework using probabilistic models. In one application of IIT, an attempt to determine the integrated information from EEG data was reported [17]. Recently, IIT research has grown beyond the original purpose of evaluating the quality and quantity of consciousness. Niizato et al. [18] utilized integrated conceptual information to characterize the behavior of schools of fish. IIT also deals with the quality of consciousness (qualia) [19]. IIT3.0 [4] assumes that a constellation of concepts associated with the value of mechanism-level integrated information ϕ Max and the core cause and core effect (past and future probability distributions), correspond to the quality of consciousness. The validity of this assumption needs to be confirmed in the future, and it has recently been suggested that category theory, which mathematically deals with relationships between multiple mathematical structures, can be a powerful tool for this purpose [20,21]. IIT has influenced other fields and theories as well. IIT claims that consciousness emerges as intrinsic information independent of physical entities, however, Barrett [22] discussed the hypothesis of how consciousness can be related to the fundamental fields of physics. Safron [23] recently proposed a theory that unifies IIT with two other prominent theories of consciousness, the free energy principle [24] and the global neuronal workspace theory [25]. 
On the other hand, despite the prominence of IIT research, problems related to the formulation of IIT and problems from a philosophical perspective have been pointed out, partly because of the lack of evidence supported by experiments [26,27,28]. For IIT to develop in the future, it will be important to verify its consistency with clinical findings. As mentioned above, many metrics and approximation methods have been proposed for IIT2.0, and much knowledge has been accumulated so far. The most significant development from IIT2.0 to IIT3.0 is the introduction of system-level integrated information. As a result, IIT3.0 has become more complicated with hierarchical computational procedures and increased computational complexity, and no practical approximation methods have been proposed. Analyses for several networks were described in the original article of IIT3.0 [4], however, since then, only a few papers have discussed the value of Φ for specific networks. Popiel et al. [29] examined the trend of Φ for a traditional fully connected Ising network of five nodes. In their research, a parameter of temperature was introduced for controlling stochastic fluctuations in the state transitions, and the variation of Φ with respect to the temperature was investigated in the context of phase transitions for the magnetization and magnetic susceptibility of the statistical mechanics. Furthermore, few insights into how major complexes are generated have yet been described. We believe that IIT3.0 is still at the stage of accumulating knowledge concerning fundamental networks. In this study, we evaluate the integrated conceptual information of a major complex, Φ Max , in networks significant for validating IIT. We consider the well-known clinical finding that patients with a split brain appear to have independent consciousness in their left and right hemispheres [30,31], and then consider small networks in which two loops corresponding to both hemispheres are connected by a bridge. To enhance the diversity of the system model, incorporating the concept of frustration for loops and stochastic fluctuations for state transitions, we examine multiple topologies with different numbers of nodes in the loops. In particular, we describe how often a small loop becomes a major complex, or conversely, how often the entire network becomes a major complex. Owing to the computational complexity, the maximum number of nodes is kept at eight in our experiments, and it is not possible to derive strict relationships between the split brain and IIT in terms of neurophysiology. Our goal is to provide, in the realm of IIT, rich insight into the factors that make a part, rather than the whole, of the network a major complex. Methods In this study, we examine the integrated conceptual information of a major complex, Φ Max , and the corresponding major complex constitution for networks with the five topologies shown in Fig. 1. In the network shown in Fig. 1(a), the loop consisting of nodes 1, 2, and 3 and the loop consisting of nodes 4, 5, and 6 are connected by the edge between nodes 3 and 4. Similarly, in networks (b) and (c), two loops consisting of three or four nodes are connected by an edge. Such an edge, when removed, divides the entire network into two subsystems and is termed a bridge. Bridges are important paths in network efficiency because they are frequently passed through during traffic between two arbitrary nodes. 
From this perspective, although the number of nodes is extremely small compared with a human brain, the two loops correspond to the two brain hemispheres and the bridge to the corpus callosum. In addition, the network in Fig. 1(b) is expected to have a stronger tendency to form a major complex in only one loop than the network in Fig. 1(a), because more nodes are included in each loop and, intuitively, each loop is likely to be isolated. For comparison, we also examine the topologies shown in Fig. 1(d) and (e). In Fig. 1(d), adding a single edge to the network in Fig. 1(a) makes the edge between the loops non-bridging, and we expect the entire network to have a strong tendency to form a major complex. In the fully connected network shown in Fig. 1(e), where edges are set between all nodes, it appears that the entire network is likely to be a major complex. To calculate the integrated information according to the IIT framework, we first need to define the transition probability matrix (TPM) of the target network. In this study, the state of node i (= 1, 2, . . . , N) is represented as S i (= ±1), corresponding to firing and non-firing, although the index representing time is not explicit. The input to node i at a given time is represented as σ_i = Σ_j J_ij S_j (1), where J ij is assumed to be zero if there is no edge between nodes i and j, and 1 or −1 if there is an edge between nodes. Also, for self-couplings, J ii = 0 is assumed. J ij = 1 means that nodes i and j tend to assume the same state, and J ij = −1 means that nodes i and j tend to assume different states. Using σ i, the probability p(S i) that node i will assume S i (= ±1) at the next time step is represented as p(S_i) = exp(σ_i S_i / T) / [exp(σ_i / T) + exp(−σ_i / T)] (2), where T is a parameter called temperature which controls the accuracy of neuronal behavior, as introduced in the traditional Hopfield model and the above-mentioned work [29]. In this study, we also examine the dependence of Φ Max on temperature T. All 2^N network configurations are subject to our investigation; however, states which give completely equivalent computational processes of the integrated conceptual information are excluded as duplicates. For example, in the network of Fig. 1(a) with all edges J ij = 1, the configuration S 1 = −1, S i = 1 (i = 2, 3, 4, 5, 6) and the configuration S 2 = −1, S i = 1 (i = 1, 3, 4, 5, 6) are completely equivalent from the perspective of computing the integrated conceptual information. In such a case, only one of the multiple equivalent states should be considered. Stable states satisfy the condition J ij S i S j = 1 for any edge and its related nodes, and the existence of such a stable state in a loop can be determined by whether the product of all edges J ij included in the loop becomes 1. If the product is −1, there is no stable state for the loop. Within the field of magnetism in statistical physics, this kind of loop characterized by a negative product is referred to as "frustration." In this study, we also investigate the effect of the existence of frustrated loops on the value of Φ Max because actual neural circuits are expected to be frustrated with a mixture of inhibitory and excitatory connections. The amount of mechanism-level integrated information ϕ Max tends to be higher when past and future states are definitively determined based on the current state. Therefore, we presume that integrated conceptual information Φ Max is also smaller in networks with frustrated loops. Let us define the specific edge settings in our experiments.
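Before the specific edge settings are given, the construction just described can be made concrete in a short sketch: the state-by-node transition probability matrix implied by Eqs. (1) and (2), the sign-product test for loop frustration, and the search for the major complex with the PyPhi library used in this study (introduced below). The 3-node coupling matrix is illustrative only, and the call names assume PyPhi 1.x (earlier releases use main_complex instead of major_complex).

```python
import numpy as np
import pyphi

def state_by_node_tpm(J, T):
    """Row k gives p(S_i = +1 at t+1) for each node i, conditioned on the
    current state encoded by k (node 0 is the least significant bit;
    bit 1 corresponds to S = +1 and bit 0 to S = -1)."""
    n = J.shape[0]
    tpm = np.zeros((2 ** n, n))
    for k in range(2 ** n):
        s = np.array([2 * ((k >> i) & 1) - 1 for i in range(n)])  # states in {-1, +1}
        sigma = J @ s                                             # Eq. (1)
        tpm[k] = 1.0 / (1.0 + np.exp(-2.0 * sigma / T))           # Eq. (2) for the case S_i = +1
    return tpm

def loop_is_frustrated(J, cycle):
    """A loop is frustrated when the product of its edge couplings is -1."""
    product = 1
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        product *= J[a, b]
    return product < 0

# Illustrative 3-node loop with one inverted edge (hence frustrated).
J = np.array([[ 0, 1, -1],
              [ 1, 0,  1],
              [-1, 1,  0]])
print(loop_is_frustrated(J, [0, 1, 2]))                  # True

network = pyphi.Network(state_by_node_tpm(J, T=1.0), cm=(J != 0).astype(int))
major = pyphi.compute.major_complex(network, (1, 1, 1))
print(major.subsystem, major.phi)                        # major complex and its Phi Max
```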
As a network with no frustration, all edges are set to J ij = 1. As a network with frustration in one loop, we set J 13 = −1 in the topologies of Fig. 1(a) and (c), J 24 = −1 in (b), J 12 = −1 in (d) and (e). The other edges are assumed to be 1. For a network with frustration in both loops, we also set the edge located in the symmetric position of the right loop to −1. However, for network (e), we set J 45 = −1. Note that in network (d), the loop formed by nodes 2, 3, 5, and 4 is not frustrated in any case. As described above, for the five topologies shown in Fig. 1, we consider the following cases: a) no frustration, b) frustration in one loop, and c) frustration in both loops, resulting in a total of 15 networks. For each network, we find the major complex and obtain the integrated conceptual information Φ Max for all states except for degenerations which are equivalent from a computational point of view, while changing the temperature parameter T . For our experiments, we use the Python library called PyPhi, published by Mayner et al. [32] for computing integrated conceptual information. Results In this section, we present the results of our simulations. The changes in Φ Max with respect to T are shown, followed by the indices related to the formation of the major complex. A deeper discussion of results is presented in the next section. Integrated conceptual information Φ Max with respect to temperature T Fig. 2 shows the value of the integrated conceptual information of a major complex Φ Max versus the temperature parameter T for the network in Fig. 1(a). For each T in the network without frustration, a particularly large value of Φ Max is observed when the states of all nodes are equal, that is, S i = 1 or S i = −1, which corresponds to stable states in the limit of T → 0. A series of changes in Φ Max for other configurations is relatively small and tends to have a maximum value in the range of T = 1 to T = 2, instead of monotonically decreasing with T . Because the mechanism-level integrated information ϕ can be qualitatively regarded as the extent to which the current state can define the past and future states, it might be intuitively inferred that the value of ϕ is larger when T → 0, which has no probabilistic fluctuations. However, the computation of the system-level integrated conceptual information Φ is too complicated to be understood intuitively because it involves the hierarchical computation of unidirectionally partitioning the subsystem, calculating the mechanism-level integrated information, and measuring the distance between conceptual structures. It is not incomprehensible that Φ Max may be larger in the presence of stochastic fluctuations, which is further investigated in the Discussion section. On the other hand, when the value of T is high, the value of Φ Max approaches zero regardless of node configuration. For T → ∞, the transition probability between any two configurations approaches 1/2 N . This also holds for the partitioned system, and it is obvious that there is no difference between the two systems before and after partitioning in the calculation process of Φ, resulting in Φ → 0. When a single loop is frustrated, no such prominently large Φ Max is observed, unlike in the network where no frustration is present. One of the configurations with a moderately large Φ Max value is {S i } = {1, −1, 1, 1, 1, 1}. Considering J 13 = −1, we see that the state at the next time step for this configuration is uniquely determined in the limit of T → 0. 
We believe that this causal uniqueness leads to large values of Φ Max . As in the case of the network without frustration, in some series the maximum value of Φ Max is observed within the range of T = 1 to T = 2. A series of changes in Φ Max for the network with frustration in the two loops is identical to that of the network without frustration, indicating that both networks have causally equivalent configurations. We examine this in more detail in the Discussion section. It should be noted that the current IIT version cannot handle the time evolution of states. Even if two distinct networks have the same value of Φ Max at any instant, either one may transition to another state with a different value of Φ Max at the next time point or may move between multiple states while maintaining a constant value of Φ Max . As an example of one of the configurations that exhibits a pronouncedly large Φ Max , let us consider {S i } = {1, −1, 1, −1, −1, 1} in a network with two frustrated loops. Considering J 13 = J 45 = −1 in our setting, we can see that this configuration transitions at the next time point with probability 1 to a flipped version of the current states in T → 0. Thus, the system maintains a high value of Φ Max switching between two mirror states. This is quite different from the situation in which a network without frustration maintains only one stable state of S i = 1, as described above. The integrated conceptual information Φ Max for the networks in Fig. 1 is shown in the Additional file 1. For networks (c, d), the tendency is qualitatively similar to the case of network (a). Namely, the following are observed: 1) In the network without frustration, Φ Max is remarkably high in the stable state of T → 0, that is, all the nodes are 1 or −1, and the integrated conceptual information is relatively low in other states while reaching a maximum value in the range of T = 1 to T = 2. 2) In networks with a single frustrated loop, no state with a remarkably high value of Φ Max is observed. 3) The graph illustrating the change in Φ Max in networks with two frustrated loops is identical with that of the network without frustration. On the other hand, networks (b, e) show a different trend from networks (a, c, d). Specifically, we find that 1) Φ Max has a maximum value near T = 0 and overall decreases as T increases; 2) the stable state in T → 0 without frustration has a relatively large Φ Max , but the degree of protrusion is limited; and 3) a series of changes in Φ Max for the network with two frustrated loops is not identical with that of the network without frustration. Moreover, considering all results, we can see that the scale of the Φ Max differs greatly depending on network topology. In comparing network (c) with network (a), it has only one more node added, but Φ Max is roughly doubled. On the other hand, although network (b) has the largest number of nodes, Φ Max is at most approximately 1.6, which is notably lower than the other networks. The number of nodes constituting a loop in this network is even (four), unlike networks (a, c, d), and the parity of the number of nodes has a significant influence on the number of concepts in conceptual structure, directly affecting the value of Φ Max . We explore this in more depth in the Discussion section. Formation of major complexes We examine the effect of network topology on the tendency of a subsystem to become a major complex. 
Two kinds of ratios of the following events among the 2 N configurations are investigated for each network: Ratio1 Major complex is formed by less than or equal to half of the total number of nodes N , Ratio2 A major complex is formed by all nodes. However, in practice, configurations that are essentially equivalent, such as mirror states, are excluded instead of duplicates being counted, thus reducing the total number of configurations to less than 2 N . Two types of statistics are obtained for each ratio by setting different ranges of target T . First, when calculating these indices, any configuration that satisfies each event at any one of the simulation temperatures T is counted. Second, in order to clarify the dependence of the ratios on the parameter T , only cases satisfying the requirement at any point among T ≥ 1.5 are counted for ratio1, and only cases satisfying the requirement at any point among T ≤ 0.1 are counted for ratio2. Ratio1, the rate of major complexes formed by less than or equal to half of the total number of nodes N , is shown in Fig. 3. The horizontal axis represents the network topology with the number of frustrated loops, and the vertical axis represents the ratio. The bar graph shows statistics for the entire temperature range, and the line graph shows statistics for T ≥ 1.5. As expected, this ratio is larger for networks containing bridges in Fig. 1(a, b, c) than for networks without bridges in Fig. 1(d, e). The formation of major complexes with such a small number of nodes is frequently observed when values of T are relatively small. As shown by the green lines, the ratio significantly decreases in the presence of relatively large stochastic fluctuation, T ≥ 1.5, and in particular, becomes zero for networks (d, e). In the next section, we discuss the reason why major complexes with a small number of nodes are likely to occur under low stochastic fluctuation. Furthermore, this ratio is especially large for network (b), which has more nodes in the loops on both sides of the bridge. However, in network (b), it has been observed that nodes {1, 2, 3, 4} or {5, 6, 7, 8} rarely form a major complex even for low T , and a major complex is often composed of three or fewer nodes. Ratio2, the rate of the major complexes formed by all nodes, is shown in Fig. 4. The bar graph depicts statistics for the entire temperature range, and the line graph depicts statistics for T ≤ 0.1. For the entire range of T , this index is almost 1 except for network (b), which also implies that it is common for the entire network to be integrated into a major complex. However, this ratio significantly decreases in the absence of stochastic fluctuation, T ≤ 0.1, for networks (a, b, c) with bridges. On the other hand, no significant temperature dependence is observed in networks (d, e) without a bridge. Another feature of this index is that the effect of frustration is significant in network (b). In this network, the presence of frustration reduces the number of completely random nodes with p(S i = ±1) = 0.5, encouraging the entire network to become a major complex, and conversely, preventing a small number of nodes from becoming a major complex. The role of frustration is explained in detail in the Discussion section. 
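For reference, the two ratios defined above reduce to simple counts once the major-complex size is known for every non-equivalent configuration; a minimal sketch at a single temperature (the study aggregates over temperature ranges, and the sizes below are hypothetical):

```python
def complex_size_ratios(major_complex_sizes, n_nodes):
    """ratio1: fraction of configurations whose major complex has at most N/2
    nodes; ratio2: fraction whose major complex spans all N nodes."""
    total = len(major_complex_sizes)
    ratio1 = sum(size <= n_nodes / 2 for size in major_complex_sizes.values()) / total
    ratio2 = sum(size == n_nodes for size in major_complex_sizes.values()) / total
    return ratio1, ratio2

# Hypothetical major-complex sizes for three non-equivalent 6-node configurations.
sizes = {(1, 1, 1, 1, 1, 1): 6,
         (1, -1, 1, 1, 1, 1): 3,
         (1, -1, 1, -1, -1, 1): 6}
print(complex_size_ratios(sizes, n_nodes=6))   # -> (0.333..., 0.666...)
```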
These results indicate that in a system with a bridge, each loop tends to dominantly determine the integrated conceptual information, especially when the stochastic fluctuations of state transitions are small, while the entire network tends to become a major complex when the stochastic fluctuations are large or when the network is densely connected. Discussion Congruence of Φ Max in cases of no frustration and two frustrations We consider what causes the congruence of Φ Max in the case of no frustration and of two frustrations for the network in Fig. 1(a, c, d). Fig. 5(a) shows two different networks, where the white and black circled nodes represent S i = +1 and S i = −1, respectively, the solid and dashed edges represent J ij = +1 and J ij = −1, respectively. The left network is free of frustration, and the right network has frustration in each loop. The numbers beside each node are values of σ i , which are derived by Eq. 1. In these networks, the absolute value of σ i for each node is equal. For nodes with different signs of σ i between the two networks, the values of p(S i = +1) and p(S i = −1) are also flipped. However, this probability flip has no effect on the distance between the non-partitioned and partitioned repertoires measured by the earth mover's distance (EMD), because the ground metric of EMD both for calculating ϕ and Φ is defined by the Hamming distance, that is, the discrepancy of each node state. It can be said that 1/ − 1 representing the state of each node is simply a label. Thus, these two networks form an equivalence causality between two adjacent times, resulting in equal values of Φ Max . Fig. 5(b) depicts another example, in which interchanging the positions of the two rightmost nodes (node 5 and node 6 in Fig. 1(a)) does not affect causality. When a loop is formed with three nodes, it is always possible to set the configuration such that the value of Φ Max is equal for both networks with and without frustration in the two loops. On the other hand, such a counterpart cannot exist in a network with only one frustrated loop. Moreover, when the loop is formed by four nodes, it is not always possible to determine configurations that give values of Φ Max common to systems with different numbers of frustrations. In IIT3.0 [4,5], the constellation of concepts corresponds to the qualia which the network experiences as subjective consciousness. Although the two networks mentioned above have an equal value of Φ Max , their constellations are different because the sign flip of σ i alters the constellation, but not EMD. Furthermore, as mentioned in the Results section, considering time evolution, the two networks do not necessarily maintain common values of Φ Max as they transition from the current states. Therefore, these two networks can have the same value of Φ Max simultaneously, but the qualia occurring in them are different. However, in the exceptional case where the sign flip is present in all nodes, IIT considers the same qualia occur in these two networks and designates such constellations isomorphism [19]. The quest for conditions that allow two different networks to maintain equal values of Φ Max during time evolution, is a future challenge. Maximum of Φ Max in the range of T = 1 to T = 2 We examine why Φ Max has a maximum in the range of T = 1 to T = 2, depending on the network topology and node configuration. 
The value of ϕ Max associated with each concept comprising the conceptual structure can be interpreted as the degree to which the current state defines past or future states. Temperature T is a parameter that determines the accuracy of neuronal response, and a larger T makes the neuron's behavior more stochastic. Thus, we might intuitively expect that as T increases, the value of ϕ Max will decrease, and consequently, Φ Max will also decrease. Indeed, it is obvious that in the limit of T → ∞, Φ Max → 0, but the experimental results suggest that Φ Max does not necessarily show a monotonic decrease. This non-monotonicity for Φ Max is explained using a simple network of four nodes, as shown in Fig. 6(a). Let us assume that states of all nodes are identical, that is, S i = 1 or S i = −1. The dependence of Φ Max on T for this network is shown by the blue line in Fig. 6(b), which also shows a maximum at T = 1.5. For T ≥ 0.4, a major complex is formed by all nodes, and the conceptual structure consists of 14 concepts as only the combination of nodes 3 and 4 in the power set is not a concept. Let us take as an example a mechanism consisting of node 1 and node 4 with all nodes being a subsystem, for which the optimal purview is a combination of node 2 and node 3 at any temperature. Representing the mechanism-level integrated information obtained for the optimal past and future purviews as ϕ Max cause and ϕ Max effect , respectively (see the Appendix), in this case, the relation ϕ Max cause < ϕ Max effect holds, and ϕ Max is determined by ϕ Max cause indicated by the orange line in Fig. 6(b). Fig. 6(c) shows the change in the probability of each bin of the cause repertoire with respect to T . Note that only the probabilities for the four configurations determined by the states of nodes 2 and 3 in the purview are shown. The solid lines show the values of the core cause, that is, the cause repertoire for the non-partitioned purview. The dashed lines show the values for the purview partitioned by MIP. Within the limit of T → ∞, the probability of each bin in both non-partitioned and partitioned repertoires approaches the value of a uniform distribution (in this case, 1/4). However, the speed of change as T rises is not constant, and it is probable that the magnitudes of probabilities for the non-partitioned repertoire and the partitioned repertoire are reversed somewhere in the middle, as in the bin of {11}. Additionally, as shown in the bin of {01} in the partitioned repertoire, the change may not be monotonous and rather a mixture of decrease and increase. These can be seen in the repertoire for three reasons. First, the value of σ i (Eq. 1) is different depending on the node, and consequently, the derivative of Eq. 2 for a specific value of T differs for each node. Second, the operation of partitioning is computationally realized by the marginalization of TPM, which also makes the differential coefficients inconsistent between the non-partitioned and partitioned mechanism/purview. Third, especially for the cause repertoire, the value of each bin indirectly affects the others because normalization is performed during the calculation based on the Bayes' formula. ϕ is obtained by EMD between non-partitioned and partitioned repertoires, and due to factors mentioned above, even ϕ Max can exhibit non-monotonicity with respect to T . It is natural that Φ Max , which is calculated using the set of ϕ Max , can show non-monotonicity. 
In the network of this study, σ i is likely to take on a value of one or two, and the probability defined in Eq. 2 has a large slope around T = 1.0. Therefore, it is speculated that the above-mentioned effects are most apparent at these temperatures. It is believed that stochastic uncertainties, such as thermal and spontaneous fluctuations, exist in actual neuronal behaviors. These neuronal fluctuations are expected to be well modeled by Eq. 2 and the value of T giving the maximum value of Φ Max , rather than larger values of T which make the behavior of the neuron too imprecise. Further work is needed to link the results of the present study to real neuronal activity and the amount of consciousness. Relationship between network topologies and magnitudes of Φ Max We investigated the value of Φ Max versus T for the five types of network topologies shown in Fig. 1. From Fig. 2 and the Additional file 1, it can be seen that the scale of the value of Φ Max differs greatly depending on network topology. We discuss the causes of these differences. Fig. 7 shows the number of concepts included in the major complex for the five network topologies investigated in this study, under conditions in which loops are not frustrated and states of all nodes are identical. In general, the larger the number of concepts in a conceptual structure, the larger the Φ Max , as was also mentioned in [4]. The figure includes the number of concepts in a non-partitioned system (solid line) and that in a system partitioned by the optimal unidirectional cut (dashed line). The value of Φ Max is significantly affected by the difference in the number of concepts between the non-partitioned and partitioned systems, and it can be confirmed that these differences roughly correspond to the order of magnitude of Φ Max for the five topologies. Even though networks (a, b, c) in Fig. 1 have similar topologies with two loops connected by a bridge, a significant difference in the number of concepts occurs. We believe that this is due to the parity of the number of nodes comprising the loop. Here, we consider a network made up of a single loop, as shown in Fig. 8(a). The state of each node is probabilistically determined by the state of the nodes connected by edges and the type of edge J ij , using Eqs. (1) and (2). Therefore, the state S i at time t can affect the state of only its neighboring nodes connected by the edge at time t + 1, and not the states of itself nor of the non-adjacent nodes. Similarly, the state S i at time t is affected only by the state at time t − 1 of its neighboring nodes. Thus, in the single-loop network shown in Fig. 8(a), two neighboring nodes cannot be a concept, except when the number of nodes constituting a loop, M , is three. For example, in the loop of M = 4, when nodes 1 and 2 constitute a mechanism with all nodes a subsystem, partitioning any purview into a group of nodes that are causally related to node 1 (i.e., nodes 2 and 4) and a group of nodes causally related to node 2 (i.e., nodes 1 and 3) will result in ϕ Max = 0 ( Fig. 8(b)). Note that a similar argument does not hold for the case of M = 3. Extending this idea, we see that in a loop with M of even numbers, only a combination of every other node can be a concept. 
For example, in a loop of M = 6, nodes {1, 3, 5} and nodes {2, 4, 6}, etc., can be a concept (ϕ Max > 0), while nodes {1, 3, 4, 5} lead to ϕ Max = 0 by breaking any purview into a group that has causal relations with nodes {1, 3, 5} (i.e., nodes {2, 4, 6}) and a group that has causal relations with nodes {2, 4, 6} (i.e., nodes {1, 3, 5}), as in the case of Fig. 8(b). It is also clear that a concept consisting of the entire network can only be generated when M is an odd number. From the above discussion, it can be expected that the number of concepts is significantly reduced for loops of even M . For the four networks shown in Fig. 8(a), the value of Φ Max and the number of concepts in non-partitioned and unidirectionally partitioned systems are shown in Table 1 under the conditions of T = 1.0 and identical node states. It can be seen that the value of Φ Max and the number of concepts repeatedly increase or decrease as M changes. Note that the value of Φ Max is extremely low for M = 4 because the same number of concepts are obtained in partitioned and non-partitioned systems, and that the value of Φ Max for M = 6 is also close to that of M = 3. Although our network of interest, shown in Fig. 1, does not consist of a single loop, we believe that the main cause of the extremely low Φ Max value of network (b) is the parity of nodes composing the loop. On the other hand, we speculate that the increase in the number of node combination patterns which can directly influence the concept generation contributes to the high Φ Max of network (c). A real neuronal network contains loops of various sizes. Since no analysis focusing on parity has been conducted thus far, new findings may be obtained by paying special attention to parity when examining IIT with actual brain data in the future. Generation of small-sized major complexes with small T Ratio1 is the fraction of major complexes formed by less than or equal to N/2 nodes and tends to be larger when parameter T is smaller. In this section, we investigate possible reasons for this trend. In particular, for T < 0.04, p(S i = 1) can be approximated as 1 or 0, which corresponds to the positive and negative of σ i . This means that for T ≈ 0, the state of each node at the next time step is determined by a majority vote of the current states of its neighboring nodes connected by edges. Additionally, in the case of σ i = 0, the state of the next time step for such a node is determined at random, because p(S i = ±1) = 0.5, regardless of T . These two effects of majority voting and complete randomness can reduce the number of concepts in the entire network, and consequently make the subsystem composed of a small number of nodes more likely to become a major complex, as explained below. Firstly, to illustrate the effect of majority voting, we consider the network shown in Fig. 9(a), in which impact of voting is pronounced. Representation for nodes and edges are the same as in Fig. 5. In this network, when all the nodes are set as a subsystem, no concept spanning nodes 3 and 4 can be generated in the case of T ≈ 0. This is because for any given purview, if a cut is set up to divide all nodes into two groups separated by an edge between nodes 3 and 4, while allowing one group to be an empty set, it is always possible to achieve ϕ Max = 0. Note that this can be done only if p(S i = ±1) is approximated as 1 or 0. As a result, only concepts consisting of combinations of elements from the node set {1, 2, 3} or the node set {4, 5, 6} can exist. 
Furthermore, for a partitioned subsystem with the unidirectional cut separating nodes {1, 2, 3} and {4, 5, 6}, the obtained concepts are exactly the same as for the non-partitioned system, considering this unidirectional cut has no effect on the majority decision in T ≈ 0. For a mechanism that includes node 3 or node 4, ϕ Max cause or ϕ Max effect is slightly different from the non-partitioned system, so the value of Φ Max does not vanish, although it is expected to be sufficiently small. In this network, the value of Φ Max is larger when, for example, a node set {4, 5, 6} is a subsystem and a unidirectional cut is placed in it, than when the entire network is a subsystem. On the other hand, as the value of T increases, the situation of strict majority voting terminates, and middle values, 0 < p(S i = 1) < 1, need to be handled. Unlike the situation with T ≈ 0, partitions can change the value of transition probability, and then concepts spanning nodes 3 and 4, such as a node set {1, 2, 4, 5}, can be generated. In fact, in the network shown in Fig. 9(a), the entire network has 29 concepts at T = 1.5, compared to 14 concepts at T = 0.03. As a result, a major complex is formed by all nodes in the region of large T . The number of concepts for networks other than the one discussed here are also likely reduced due to "insensitivity" to σ i caused by T ≈ 0, resulting in a tendency to generate small major complexes. Secondly, to illustrate the effect of complete randomness, we consider the network shown in Fig. 9(b). In this case, regardless of T , nodes 1 and 2 have p(S i = 1) = 0.5. Concepts involving nodes adjacent to such random nodes are less likely to occur. For example, consider a mechanism consisting of nodes 1 and 3, adjacent to node 2 of a random variable. In this case, to avoid trivial vanishing ϕ Max , node 2, which has a causal relationship with both nodes 1 and 3, must be included in the purview. However, by setting the partition to isolate only node 2, ϕ Max = 0 is eventually obtained for any purviews. This is because node 2 originally has p(S i = 1) = 0.5, thus, isolating it does not change its substantive causal relationship, resulting in at least ϕ Max effect = 0. A subsystem with many complete random nodes has a smaller number of concepts, and as a result, is less likely to become a major complex. Additionally, to the effect of majority voting, for the network in Fig. 9(b), when T is small, the subsystem consisting of nodes {4, 5, 6} is a major complex. As another example, for the network shown on the left of Fig. 9(c), where only one node has p(S i = 1) = 0.5, all nodes form the major complex even if T is small. On the other hand, the network on the right side of Fig. 9(c) has four nodes with p(S i = 1) = 0.5, and the size of the major complex remains small even if T is large. As described in the previous section, the value of Φ Max can be larger in the range of 1 ≤ T ≤ 2 than in T ≈ 0, and we optimistically expected that this temperature range with Eq. (2) was able to express the uncertainty of real neurons. In this section, we argued that within this temperature range, the entire network has a strong tendency to become a major complex. This may be consistent with the fact that our consciousness is usually an integration of information from the left and right hemispheres. Although these ideas are only self-serving conjectures from the results of the present study, they may also become important in the future in verifying the consistency of IIT with a real human brain. 
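Eqs. (1) and (2) are not reproduced in this section, but the behaviour relied on throughout, p(S i = 1) saturating to 0 or 1 as T → 0 whenever σ i ≠ 0, staying at exactly 0.5 when σ i = 0, and taking graded values at moderate T, is consistent with a Glauber-type update p(S i = +1) = 1 / (1 + exp(-2σ i / T)) with σ i = Σ_j J_ij S_j. The sketch below assumes that form (an assumption, since the exact expression is defined earlier in the paper) and builds the full 2^N × N transition table for a 4-node ring with excitatory couplings, showing how the low-temperature table collapses to a majority vote with 0.5 entries only where the two neighbouring inputs cancel.

```python
import numpy as np
from scipy.special import expit

def p_on(sigma, T):
    """Assumed Glauber-type reading of Eq. (2): probability that S_i = +1 at the next step."""
    return expit(2.0 * sigma / T)

def transition_table(J, T):
    """State-by-node table: row = current joint state (little-endian bits mapped to +/-1),
    column j = probability that node j is +1 at the next step."""
    N = J.shape[0]
    table = np.zeros((2 ** N, N))
    for idx in range(2 ** N):
        S = np.array([1 if (idx >> i) & 1 else -1 for i in range(N)])
        sigma = J @ S                       # field felt by each node from its neighbours
        table[idx] = p_on(sigma, T)
    return table

# 4-node ring with all excitatory (+1) couplings, as in Fig. 8(a) with M = 4.
J = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

for T in (0.03, 1.5):
    tpm = transition_table(J, T)
    near_det = np.mean((tpm < 1e-6) | (tpm > 1 - 1e-6))   # hard majority-vote entries
    ties = np.mean(np.isclose(tpm, 0.5))                   # cancelling inputs, sigma_i = 0
    print(f"T={T}: {near_det:.0%} of entries ~0 or ~1, {ties:.0%} exactly 0.5")
```

The fraction of entries fixed at 0.5 is the same at both temperatures, which is the "complete randomness" discussed above, while the remaining entries are effectively deterministic only at the small value of T.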
Relation between ratio1 and the size of loops As described in the previous subsection, more nodes with p(S i = ±1) = 0.5 tend to generate fewer concepts and reduce the size of the major complex. A necessary condition for complete randomness is that the number of edges coming into a node be an even number, and it can be said that σ i = 0 is most likely to occur when two nodes are connected with each other, which is satisfied by the loop in our study. Therefore, it can be inferred that by increasing the size of the loop, the major complex is more likely to be formed by a small number of nodes, rather than by all nodes. In particular, because the loop in the network shown in Fig. 1(b) consists of an even number (four) of nodes, the entire four nodes in the loop cannot become a concept, as mentioned earlier, further reducing the size of the major complex. For network (b), not only is the ratio1 larger than that of networks (a, c), but a huge proportion of the major complex is formed by only two nodes, such as nodes 2 and 4. The computation of Φ Max based on IIT3.0 requires operations of exponential order on N , and it is not practical to increase N unless an approximate method is used. However, it can be inferred that the value of ratio1 becomes large with increasing N , especially when loop size is even. Since the human cerebral hemisphere is not a single loop, it is necessary to analyze networks with complex structures, such as those with a mixture of multiple loop sizes. Effect of frustration on major complexes Finally, let us discuss the effect of frustration on the formation of major complexes. As illustrated in Fig. 4, the difference caused by the presence of frustration is pronounced in ratio2, the percentage of major complexes formed by the entire network, especially for the network in Fig. 1(b). The rate for network (b) is the lowest in the absence of frustration and increases with the number of frustrated loops. To investigate the rationale for this, we focus on two networks consisting of four nodes, as shown in Fig. 10. The representation for nodes and edges are the same as in Fig. 5. The network on the left side of Fig. 10(a) has no frustration, and all nodes have p(S i = ±1) = 0.5, where even a pair of nodes cannot become a concept, as shown in Fig. 9(b). As a result, the major complex is not formed by all nodes; in this case, two nodes, such as nodes 3 and 4, become a major complex, and its conceptual structure consists of only several single nodes. On the other hand, in the network of Fig. 10(b) with one edge inverted, the presence of frustration reduces the number of random nodes with p(S i = ±1) = 0.5 to only two, and node sets {1, 3} and {2, 4} can be a concept. As a result, all four nodes can form a major complex. However, it should be noted that a concept with all nodes cannot exist. For a loop consisting of four nodes, not all nodes can have p(S i = ±1) = 0.5 in the presence of frustration, regardless of the node configuration. Therefore, we can state that frustration plays a role in mitigating randomness characterized by p(S i = ±1) = 0.5, facilitating the entire network to become a major complex. Network (b) has two loops consisting of four nodes, and because of the effect described above, ratio1 decreases and ratio2 increases with the number of frustrations. 
However, the overall trend described in the earlier section, that two nodes often form a major complex when T is small, and that the entire network is likely to be a major complex, especially when T is large, is preserved. When three nodes comprise a loop, not all nodes can have p(S i = ±1) = 0.5, regardless of the presence of frustration. We speculate that this is a primal reason why no significant relationships between the number of frustrations and the two ratios were observed for networks (a, c). In the case of a loop consisting of five or more nodes, if not all, but four or five neighboring nodes have p(S i = ±1) = 0.5 in a cluster, the entire loop is not expected to be a major complex, although it may depend on the value of T . In the actual neural networks, a single neuron belongs to many loops. It is interesting to investigate relations between frustration and formation of major complexes in such complicated network structures. Conclusions In this study, we evaluated the integrated conceptual information Φ Max for small networks with two loops according to the IIT3.0 framework. The experimental results indicated that the parity of the number of nodes, the presence of frustration, and the stochastic fluctuation of state transitions strongly affected the magnitude of Φ Max and the formation of major complexes. As for the value of Φ Max , the following trends were often observed: 1) Φ Max did not monotonically decrease with respect to the magnitude of the fluctuation, but reached its maximum in the presence of some degree of stochastic fluctuation, and 2) even numbers of nodes in the loop reduced the number of concepts, resulting in a smaller value of Φ Max . As for the formation of major complexes, the following trends were often observed: 1) major complexes were easily formed by less than half of the nodes in networks with bridges, as compared to networks without bridges, especially in regions where stochastic fluctuations were small, and 2) frustration in some networks reduced the number of random nodes, facilitating the formation of major complexes in the entire network. Actual neuronal behavior fluctuates and frustrations exist in real neuronal circuits due to the mixture of excitatory and inhibitory connections. Under these circumstances, we usually maintain a high level of consciousness with integrated information from the left and right hemispheres. This fact corresponds to our results in 1 ≤ T ≤ 2 on an analogy level at this stage. It is expected that the results of this study will help to establish the consistency of IIT with the real brain and consciousness in the future. Mechanism-level integrated information Based on the current state of the nodes in the mechanism and the Bayes' formula, the probability distributions of the states for a specific set of nodes one step before and after, termed the cause and effect repertoires, respectively, are evaluated with nodes outside the subsystem fixed values. The set of nodes for which the probability distributions are computed is termed a purview and may differ from the mechanism, although it must be a subset of the subsystem. In addition, the purviews of the cause and effect repertoires are defined separately. The nodes in the mechanism (current) and the purview (past/future) are independently partitioned into two groups, assuming that one group of the mechanism is not causally related to one group of purviews, and the other group of the mechanism is not causally related to the other group of purviews. 
Realizing the assumption by marginalizing the TPM, the cause and effect repertoires are also calculated for this partitioned mechanism and purview. Then, the distance between the repertoires for non-partitioned and partitioned purviews is evaluated by the earth mover's distance (EMD) with the ground metric defined as the Hamming distance. The partition minimizing the EMD is termed the minimum information partition (MIP), and this minimum value is defined as the integrated information ϕ. The integrated information is evaluated for all possible purviews in the same way as described above. The maximal value among the past is termed the maximally irreducible cause ϕ Max cause and the maximal value among the future is termed the maximally irreducible effect ϕ Max effect . The cause and effect repertoires obtained for the optimal non-partitioned purviews are termed the core cause and core effect, respectively. The lower value between ϕ Max cause and ϕ Max effect is defined as the quantity of consciousness for the target mechanism ϕ Max , which is also often referred to simply as the integrated information. System-level integrated information Next, the quantity of consciousness in an upper level, that is, a subsystem and the entire network, is defined on the basis of the mechanism-level integrated information. For any possible mechanism, that is, any combination of nodes in the subsystem, the computation described in the previous section is performed. Then, a set of mechanisms satisfying ϕ Max > 0 is obtained. A mechanism with ϕ Max > 0 is termed a concept, and the set of concepts is called the conceptual structure. We consider a unidirectionally partitioned system in which the nodes in the subsystem are divided into two groups; however, the causality between the different groups is unidirectionally removed. For all patterns of unidirectional cuts, the conceptual structure is derived according to the previously described procedures. Then, the distance between the conceptual structures of the non-partitioned subsystem and each partitioned subsystem is calculated using an extended version of the EMD. The extended EMD is defined as the sum of the transportation costs of moving ϕ Max of each concept in the non-partitioned subsystem to the corresponding concept in the partitioned subsystem where the excess or unmatched portion within the value of ϕ Max , if any, is replaced by the transportation cost to the repertoire obtained under the unconstrained condition. The unidirectional cut that minimizes this distance is also called MIP, as in the case of the mechanism level, and the minimum value is termed the integrated conceptual information Φ, which is also often referred to simply as the integrated information. Finally, the procedure described thus far is performed for all possible subsystems in the network. The subsystem that gives the maximum value of the integrated conceptual information is termed a major complex. Then, amongst the subsystems that contain nodes not included in the major complex, the node which gives the maximum value of Φ is termed a minor complex. Subsequently, the next minor complex is obtained in the same way, focusing only on nodes that are not included in any of the previously selected complexes. In short, a complex can be interpreted as a subsystem whose Φ is a local maximum. The integrated conceptual information of a complex is denoted by Φ Max , and its conceptual structure is termed a maximally irreducible conceptual structure. 
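The distance used at both levels is an earth mover's distance whose ground metric is the Hamming distance between joint states. As a concrete illustration of that quantity alone, the sketch below solves the underlying transportation problem exactly as a linear program for two small repertoires. It is an independent re-implementation written for this note, not the code used in the study, and the purview enumeration, repertoire construction, and MIP search themselves are outside its scope.

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

def hamming_ground_metric(n):
    """Pairwise Hamming distances between all joint states of n binary nodes."""
    states = list(product((0, 1), repeat=n))
    return np.array([[sum(a != b for a, b in zip(s, t)) for t in states] for s in states],
                    dtype=float)

def emd(p, q, D):
    """Exact earth mover's distance between distributions p and q on the same support,
    with ground metric D, via the standard transportation linear program."""
    m = len(p)
    c = D.flatten()
    A_eq, b_eq = [], []
    for i in range(m):                       # mass leaving state i equals p[i]
        row = np.zeros(m * m); row[i * m:(i + 1) * m] = 1
        A_eq.append(row); b_eq.append(p[i])
    for j in range(m):                       # mass arriving at state j equals q[j]
        col = np.zeros(m * m); col[j::m] = 1
        A_eq.append(col); b_eq.append(q[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun

n = 2                                         # a 2-node purview, hence 4 joint states
D = hamming_ground_metric(n)
p = np.array([0.5, 0.0, 0.0, 0.5])            # e.g. a repertoire for the non-partitioned mechanism
q = np.array([0.25, 0.25, 0.25, 0.25])        # e.g. the repertoire after some partition
print("EMD(p, q) =", emd(p, q, D))            # 0.5: two parcels of 0.25 each move Hamming distance 1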
IIT3.0 concludes that the value of Φ Max for complexes, especially for a major complex, is regarded as the quantity of consciousness in the network, analogous to the consciousness of the entire brain.
…Fig. 1(a). Only typical patterns are shown, and configurations showing the same or remarkably similar changes to one of the representatives are omitted. The results in no frustration and two frustrated loops are identical.
Figure 7 Number of concepts constituting a major complex. This graph shows the number of concepts for a non-partitioned system (solid) and a system partitioned by minimum information unidirectional cut (dashed) when no frustration is present and all nodes have an identical state. Network topologies (a)-(e) correspond to those in Fig. 1. When the loop is formed by four nodes, there are fewer concepts and less differences between the two systems. No significant change in the number of concepts with respect to T is observed, except in the region of T close to zero.
Figure 8 When the number of nodes in a loop is even, the number of combinations of nodes with ϕ Max > 0 is significantly reduced because the nodes can be always divided into two causal groups without overlap. (b) Nodes 2 and 4 have a causal relationship with node 1, but not with node 2. Therefore, the same repertoire is obtained in the partitioned system as in the non-partitioned system, resulting in ϕ Max = 0. This does not hold true when the number of nodes constituting a loop, M, is an odd number.
Figure 9 Effects of majority voting and complete randomness on the size of major complexes. For T ≈ 0, the major complex is more likely to be formed by a small set of nodes than for large T. (a) No concept crosses nodes 3 and 4 for T ≈ 0. A system with a unidirectional cut, in which only one direction of the edge between node 3 and node 4 is removed, has the same concepts as the non-partitioned system, resulting in low Φ Max. (b) Nodes 1 and 2 are completely random variables characterized by p(S i = ±1) = 0.5, independent of T. By cutting any purviews to isolate these nodes, ϕ Max effect = 0 can always be achieved. (c) The network on the left has only one node with p(S i = ±1) = 0.5, and the major complex is formed by all nodes even if T is small. On the other hand, the network on the right contains four nodes with p(S i = ±1) = 0.5, and the major complex is formed by a small number of nodes even when T is large.
Figure 10 Major complex in the presence of frustration. (a) All nodes have p(S i = ±1) = 0.5, and none of the node pairs are concepts. When the entire loop is a subsystem, the conceptual structures of the non-partitioned and unidirectionally partitioned systems are so similar that Φ Max becomes small. As a result, the major complex is a node set {1, 2} or {3, 4}. (b) The addition of frustration eliminates the complete randomness of p(S i = ±1) = 0.5 at the two nodes, allowing the entire network to become a major complex.
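Putting the mechanism-level and system-level procedures described above together, the open-source PyPhi package implements IIT3.0 (repertoires, the earth mover's distance, the MIP searches, and the complex search), so the pipeline can be reproduced for very small networks. The sketch below is a minimal illustration rather than the study's actual code: it assumes PyPhi 1.2 or later with its documented state-by-node, little-endian TPM convention, the Glauber-type reading of Eq. (2) used earlier, a 3-node loop with excitatory couplings, and T = 1.0.

```python
import numpy as np
import pyphi
from scipy.special import expit

def loop_tpm(M, T):
    """State-by-node TPM (rows indexed little-endian, as PyPhi expects) for an M-node ring
    with J = +1 between neighbours, under the assumed form of Eq. (2)."""
    tpm = np.zeros((2 ** M, M))
    for idx in range(2 ** M):
        S = np.array([1 if (idx >> i) & 1 else -1 for i in range(M)])
        sigma = np.roll(S, 1) + np.roll(S, -1)   # field from the two ring neighbours
        tpm[idx] = expit(2.0 * sigma / T)
    return tpm

M, T = 3, 1.0
cm = (np.ones((M, M)) - np.eye(M)).astype(int)    # a ring of 3 nodes: every off-diagonal pair connected
network = pyphi.Network(loop_tpm(M, T), cm=cm)
state = (1, 1, 1)                                  # all nodes currently "on" (mapped to S = +1)

major = pyphi.compute.major_complex(network, state)  # evaluates Phi over all candidate subsystems
print("major complex nodes:", major.subsystem)
print("Phi of the major complex:", major.phi)
```

Larger networks such as those in Fig. 1 can be handled the same way, subject to the exponential cost noted earlier.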
2021-05-21T16:57:27.416Z
2021-04-09T00:00:00.000
{ "year": 2021, "sha1": "c61d53fdd180c6048207605792cbb798479e269a", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-385815/v1.pdf?c=1618004433000", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "cbfd452bfac8504f7d2690c0245ea88d3f38fefe", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
5375996
pes2o/s2orc
v3-fos-license
Religious Factors and Hippocampal Atrophy in Late Life Despite a growing interest in the ways spiritual beliefs and practices are reflected in brain activity, there have been relatively few studies using neuroimaging data to assess potential relationships between religious factors and structural neuroanatomy. This study examined prospective relationships between religious factors and hippocampal volume change using high-resolution MRI data of a sample of 268 older adults. Religious factors assessed included life-changing religious experiences, spiritual practices, and religious group membership. Hippocampal volumes were analyzed using the GRID program, which is based on a manual point-counting method and allows for semi-automated determination of region of interest volumes. Significantly greater hippocampal atrophy was observed for participants reporting a life-changing religious experience. Significantly greater hippocampal atrophy was also observed from baseline to final assessment among born-again Protestants, Catholics, and those with no religious affiliation, compared with Protestants not identifying as born-again. These associations were not explained by psychosocial or demographic factors, or baseline cerebral volume. Hippocampal volume has been linked to clinical outcomes, such as depression, dementia, and Alzheimer's Disease. The findings of this study indicate that hippocampal atrophy in late life may be uniquely influenced by certain types of religious factors. Introduction Religion is considered an important part of life for many Americans, with 92% reporting a belief in God or a universal spirit, 83% belonging to a religious group, and 59% reporting that they pray at least daily [1]. Research on the neurological processes involved in spiritual beliefs and practices has been growing, but studies examining possible religious or spiritual correlates of structural neuroanatomy have been rare. Specific changes in brain function have been associated with practices including meditation [2,3,4,5,6], prayer [6,7], and a variety of religious and spiritual experiences [8,9,10,11]. Several brain regions, including the hippocampus [4], have also been implicated in religious experiences and practice [4,5,9,12,13,14,15,16]. A small number of studies have found that religious beliefs, practices, and experiences are correlated with the volume of specific brain regions, but the focus has been limited to hyper-religiosity in temporal lobe epilepsy patients [17,18] and beliefs about the nature of God [19]. The current study extends this research by examining relationships between a broad range of religious factors and hippocampal volumes, including religious group membership, religious practices, and life-changing religious experiences in a sample of older adults. The hippocampus has several important functions, including spatial, contextual, and episodic learning and memory [20,21,22,23,24,25,26,27]. The hippocampus may also influence the generation of attention and emotion through connections with the amygdala [28], and moderate cortical arousal and responsiveness through interconnections with the amygdala, hypothalamus, prefrontal cortex, and other areas [28]. Global cerebral atrophy occurs as a result of aging [29], but atrophy rates differ between brain regions [30,31]. Rates of atrophy for the hippocampus have been found to accelerate during late life [29]. 
Research indicates that hippocampal volumes may be affected by exposure to elevated glucocorticoids, particularly cortisol, a hormone released in response to stress [32,33,34,35,36,37], and that cumulative cortisol exposure may lead to hippocampal atrophy through various pathways [33,34,35,36]. This atrophy has been associated with mental health outcomes, including depression [38,39,40,41,42,43] and dementia [44,45,46,47,48,49] in later life. Studies have also identified the hippocampus as a brain region potentially involved in religious beliefs and spiritual practices. Initial findings indicate that the hippocampus is activated during meditation [4], and that larger hippocampal volumes are associated with long-term meditation practice [28,50]. Among certain epilepsy patients, smaller hippocampal volumes have also been associated with hyper-religiosity [18]. Building on evidence from research with meditation and temporal lobe epilepsy, within the context of hypothesized mechanisms of stress and glucocorticoids, this study focused on the potential role of religious factors in hippocampal atrophy. The objective of the present study was to delineate the pattern of prospective relationships between religious factors and hippocampal volume change in a large sample of older adults. Ethics statement The Psychiatry Institutional Review Board of Duke University Medical Center has approved this research. After complete description of the study to the subjects, informed written consent was obtained. All clinical investigation has been conducted according to the principles expressed in the Declaration of Helsinki. Participants Participants were 268 men and women aged 58 and over, recruited for the NeuroCognitive Outcomes of Depression in the Elderly (NCODE) study. Details of recruitment for this ongoing longitudinal study are described elsewhere [38]. Participants included two groups, those meeting DSM-IV [51] criteria for major depressive disorder and never-depressed comparison participants. Exclusion criteria included concurrent diagnosis of other psychiatric or neurological illness, significant cognitive impairment, and substance abuse. Requirements for inclusion in the non-depressed group were no evidence of a diagnosis of depression or self-report of neurological or depressive illness. Participants included in these analyses were enrolled between November 1994 and January 2005, and provided two or more sets of MRI measurements. MRI scans were acquired every two years, and religious, psychosocial, and demographic data were collected at baseline and annually, using a structured psychiatric interview. Length of time between baseline and final available MRI measurement ranged from 2-8 years (mean 4.19). Religion measures Religious factors assessed at baseline included (1) frequency of public worship, (2) frequency of private religious activity (prayer, meditation, or Bible study), (3) religious group membership. Religious factors assessed at baseline and annually included (4) born-again status and (5) life-changing religious experiences. 
Born-again status was assessed with the question, "Are you a born-again Christian?" This was defined as: "A conversion experience, i.e., a specific occasion when you dedicated your life to Jesus." Participants responding no were assessed for life-changing religious experiences with the question, "Have you ever had any other religious experience that changed your life?" Participants' responses changed over time and were therefore categorized as: 1) no born-again status or life-changing religious experience, 2) baseline born-again status, 3) new born-again status (i.e., responded no to the born-again question at baseline, but yes at a later interview), 4) baseline life-changing religious experience, and 5) new life-changing religious experience. Religious group membership was classified as Catholic, Protestant, Other, or None. Because of the high degree of overlap between Protestant group membership and born-again status, the Protestant group was further divided into born-again and non-born-again subcategories. Image acquisition and analysis All subjects were imaged with a 1.5-T, whole body MRI system (Signa; GE Medical Systems, Milwaukee, WI) using the standard head (volumetric) radiofrequency coil. Two sets of dual-echo, fast spin-echo acquisitions were obtained: one in the axial plane for morphometry of the cerebrum and another in a coronal oblique plane for measurement of the hippocampus. Imaging acquisition parameters [52], volumetry of hippocampus and cerebrum [53], and the GRID software program used in analysis [54] have been described previously. Image analysis was performed at the Duke Neuropsychiatric Imaging Research Laboratory. Total cerebral volume was defined as white matter, gray matter, and cerebrospinal fluid in both cerebral hemispheres. Covariates Psychosocial and demographic covariates were included in these analyses, as well as baseline total cerebral volumes as a proxy for head size. Psychosocial factors assessed included stress (global self-reported stress experienced over the past 6 months), social support (a composite variable, primarily level of satisfaction with personal relationships [55,56]), and depression status (membership in the depressed or non-depressed group). Demographic factors assessed included age, sex, self-reported race (dichotomized as white and non-white), years of education, and duration in the study. Data analysis Multiple linear regression analyses were conducted to assess relationships between religious variables and hippocampal volume change between baseline and final MRI measurement, controlling for psychosocial and demographic covariates, and baseline total cerebral volume. Left and right hippocampal volumes were calculated separately; volume change measures were computed by subtracting baseline region volume from final region volume. Results Descriptive statistics for the study sample are presented in Table 1 (N = 268), including demographics, religious factors, covariates, and brain volumes. Table 2 Discussion The findings of this study indicate that certain religious factors may influence longitudinal change in hippocampal volume during late life. Greater hippocampal atrophy over time was predicted by baseline identification as born-again Protestants, Catholics, or no religious affiliation, compared with Protestants who were not born-again. Greater hippocampal atrophy was also predicted by reports at baseline of having had life-changing religious experiences.
These longitudinal associations were not explained by baseline psychosocial or psychiatric factors (social support, stress, and depression status), demographic factors, duration in the study, or total baseline cerebral volume. Frequency of public and private religious activity did not predict changes in hippocampal volume. One way of interpreting these findings is within the context of the hypothesized impact of cumulative stress on the hippocampus. While some religious variables have been found to be associated with positive mental health [57,58,59], other religious factors may be a source of stress [19,60,61,62,63,64]. Research on biological pathways by which stress may influence hippocampal volumes has primarily explored neuronal death [32,65,66,67,68,69], decreased neurogenesis [70,71,72,73] and dendritic retraction [74,75]. The glucocorticoid vulnerability hypothesis proposes that chronic stress alters the hippocampus by elevating levels of glucocorticoids, which in turn extends the time period during which the hippocampus is susceptible to damage from various sources [37]. The measure of stress used in this study was not correlated with changes in hippocampal volume, possibly due to the fact that it captured acute rather than cumulative stressors. Research indicates that relationships between stress and hippocampal volume likely operate at the level of cumulative rather than acute stress, leaving the cumulative stress framework a plausible interpretation of these results. Greater hippocampal atrophy was observed longitudinally in this study among born-again Protestants, Catholics, and those reporting no religious affiliation, compared with non born-again Protestants. These findings may reflect potential cumulative stress associated with being a member of a religious minority. Though religious factors have been associated with positive mental health [59,76,77], studies have shown members of religious minority groups may also experience stressors related to these group affiliations [78,79,80]. Greater hippocampal atrophy was also found to be longitudinally associated with reported life-changing religious experiences. Spiritual experiences not easily interpreted within an existing cognitive framework or set of religious beliefs have been shown in previous research to be detrimental to subjective well-being [81]. Such experiences have the capacity to produce doubts regarding previously unquestioned convictions, potentially inducing cumulative stress even if the experience was subjectively positive. If the experience prompts a change in religious groups, existing social networks may also be disrupted. Thus, as possible sources of cumulative stress, both minority religious group membership and life-changing religious experiences may contribute to conditions that are deleterious for hippocampal volume. These findings can be interpreted within the framework of previous studies identifying the hippocampus as a brain region potentially involved in religious or spiritual beliefs and practices. Using PET and MRI data, studies of meditation indicate that the hippocampus has been found to be activated during meditative states, compared to control states [4,16]. Structurally, among meditation practitioners (compared to non-practitioner controls), significantly larger volumes [28,50] and higher gray matter concentrations [28] have been found in regions activated during meditation, including the right hippocampus. 
The current study did not find an association between change in hippocampal volume and frequency of spiritual activities, possibly reflecting the potential of varying spiritual practices to affect neuroanatomy differently. Research on temporal lobe epilepsy indicates that features of hyper-religiosity may be positively associated with hippocampal atrophy, but findings are mixed [17,18]. Associations found in the current study between life-changing religious experiences (but not frequency of religious practices) and hippocampal atrophy are consistent with a previous finding that the content and intensity of religious experiences (but not frequency of religious activities), differed between regular churchgoers and temporal-lobe epilepsy patients with hyper-religious features [82], symptoms linked to hippocampal atrophy in some studies [18]. The relatively large sample size, longitudinal design, and the assessment of a range of religious and psychosocial factors are strengths of this study. Limitations include the geographically and religiously constrained nature of the sample (largely Southeastern Protestant Christians), as well as the small sample size of participants reporting a life-changing religious experience. The image acquisition used in this study is also limited to the technology available when it began in 1994, which was retained throughout the study in order to have comparable scans for longitudinal analyses. Future research on qualitative aspects of lifechanging religious experiences could provide critical insight into the particular features of religion underlying the observed relationships with hippocampal volume. In addition, comprehensive cognitive testing in future studies could help determine the role of cognitive performance in both late life religious experiences and hippocampal volume. This study is among the first to examine religious and spiritual correlates of structural neuroanatomy, identifying several understudied factors associated with hippocampal atrophy. Religious factors, including religious group membership and life-changing religious experiences, but not frequency of public and private religious practices, were longitudinally associated with hippocampal atrophy. Atrophy in this region has important clinical implications, having been identified as a marker of late life mental health problems such as depression [38,39,40,41,42,43] and dementia [44,45,46,47,48,49]. These results may reflect an impact of cumulative stress on hippocampal volume. Mechanisms for these results, such as the elucidation of potential glucocorticoid stress pathways leading to atrophy, need to be more clearly identified, making the interpretation of these findings necessarily speculative. Future research exploring neuroanatomical changes in late life should not overlook the potential impact of religious factors, which remain relevant for a substantial proportion of the US population.
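To make the analysis described in the Methods concrete, the sketch below mirrors its structure: change in hippocampal volume regressed on religious-group and religious-experience indicators while adjusting for the listed psychosocial, demographic, and head-size covariates, using the statsmodels formula interface. All column names are hypothetical and the randomly generated placeholder data stand in for the study dataset; this is an illustrative template, not the study's code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 268

# Synthetic placeholder data shaped like the variables described in the Methods.
df = pd.DataFrame({
    "left_hippo_change": rng.normal(-0.05, 0.2, n),          # final minus baseline volume
    "relig_group": rng.choice(["Protestant_not_born_again", "Protestant_born_again",
                               "Catholic", "Other", "None"], n),
    "life_changing_exp": rng.integers(0, 2, n),
    "worship_freq": rng.integers(0, 6, n),
    "private_practice_freq": rng.integers(0, 6, n),
    "stress": rng.normal(0, 1, n),
    "social_support": rng.normal(0, 1, n),
    "depressed_group": rng.integers(0, 2, n),
    "age": rng.normal(70, 6, n),
    "sex": rng.choice(["F", "M"], n),
    "race_white": rng.integers(0, 2, n),
    "education_years": rng.normal(14, 3, n),
    "years_in_study": rng.uniform(2, 8, n),
    "baseline_cerebral_vol": rng.normal(1100, 90, n),
})

# One regression per hemisphere in the paper; shown here for the left hippocampus.
model = smf.ols(
    "left_hippo_change ~ C(relig_group) + life_changing_exp"
    " + worship_freq + private_practice_freq"
    " + stress + social_support + depressed_group"
    " + age + C(sex) + race_white + education_years + years_in_study"
    " + baseline_cerebral_vol",
    data=df,
).fit()

print(model.summary())
```

The categorical coding of religious group would in practice be set so that non-born-again Protestants form the reference level, matching the comparisons reported in the Results.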
2014-10-01T00:00:00.000Z
2011-03-30T00:00:00.000
{ "year": 2011, "sha1": "6b9c20e5d46469499aedb811f21d3728d75abcf8", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0017006&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6b9c20e5d46469499aedb811f21d3728d75abcf8", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
232340819
pes2o/s2orc
v3-fos-license
Efficacy of Versajet hydrosurgery system in chronic wounds: A systematic review Abstract Studies demonstrating the effectiveness of hydrosurgery for chronic wounds are extremely limited. This systematic review aimed to evaluate the efficacy of hydrosurgery compared with conventional debridement in chronic wounds, skin ulcers, and non‐acute wounds. This PROSPERO‐registered review was performed following the Preferred Reporting Items for Systematic Reviews and Meta‐Analyses statement. A systematic search was performed in PubMed, Scopus, and Cochrane Library databases. Abstracts of all studies were screened independently by two reviewers. The bias of prospective randomised controlled studies was assessed using the Cochrane Collaboration's tool for assessing the risk of bias and RevMan 5.4 software, whereas the bias of retrospective comparative studies was evaluated using the Risk of Bias Assessment Tool for Non‐randomised Studies. Two prospective randomised controlled trials, two retrospective comparative studies, and three prospective non‐comparative studies were included. Hydrosurgery enabled rapid debridement. The Versajet Hydrosurgery System saved 8.87 minutes compared with the conventional methods. Similarly, the debridement quality was high with this system. The debridement number needed to achieve adequate wound beds was fewer in the hydrosurgery group than in the conventional group. These superiorities lead to subsequent success and cost‐effectiveness. As there were only two prospective randomised controlled studies, and much information was missing, the risk of bias was unclear. This review confirmed that hydrosurgery is useful for the debridement of chronic wounds, considering the procedural speed and quality. The Versajet Hydrosurgery System (Smith and Nephew, Hull, UK, hereinafter shortened to hydrosurgery) 9 utilises a high-pressure parallel water jet that promotes the Venturi effect. It enables a surgeon to distinguish, excise, and evacuate non-viable tissues, bacteria, and contaminants tangentially from the wound surface. It can preserve more viable tissue than conventional surgical debridement and lead to less operative bleeding than conventional surgery. 10,11 Moreover, this technique can easily be performed to debride small spaces, such as the finger web space, which is difficult with conventional methods. 12 The usefulness of hydrosurgery in treating burn wounds has been widely reported, and a systematic review already confirmed its usefulness. 13,14 In this review, hydrosurgery allows for immediate skin grafting, high graft take rates, and faster healing in burn wounds. 15, 16 Legemate et al showed that hydrosurgery-treated patients underwent few surgical procedures and had a low mean volume of blood transfusion compared with conventional debridement. 17 However, studies demonstrating its effectiveness in chronic wounds are extremely limited compared with that in burns. Moreover, no systematic review of the usefulness of hydrosurgery for chronic wounds has been performed. Therefore, the purpose of this systematic review was to evaluate the efficacy of hydrosurgery compared with conventional debridement in chronic wounds, skin ulcers, and non-acute wounds by exploring all available evidence. | MATERIALS AND METHODS This systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. 
18 The protocol of this review was submitted to PROSPERO, the international prospective registry of systematic reviews (University of York, UK) 19 on 12 June 2020 and registered on 11 July 2020 as CRD42020191743. | Eligibility criteria Several eligibility criteria were applied in this review. The inclusion criteria were as follows: 1. English full-text articles, including adults/children with chronic wounds, ulcers, and non-acute wounds. 2. Intervention with the Versajet or Versajet II Hydrosurgery System. 3. Relevant clinical outcomes and information on effectiveness, safety, and healthcare cost. 4. Prospective randomised controlled studies, retrospective comparative studies, and prospective noncomparative studies. In contrast, the exclusion criteria were as follows: | Search strategy A systematic search was performed in PubMed, Scopus, and Cochrane Library databases from 1 January 2000 to 10 August 2020. The search terms for articles from the database were "hydrosurgery," "hydrodebridement," "hyderscalpels," "water jet surgery," and "Versajet." We did not search these terms with "chronic wounds" or "ulcers." We removed the studies regarding burns or acute wounds manually to determine the type of wound. | Study selection All abstracts of studies retrieved from the database using the search strategy were screened independently by two reviewers who read and selected potentially eligible studies. The full text of these articles was collected, examined, and selected in accordance with the inclusion criteria. Key Messages • debridement is the most important procedure in the treatment of chronic wounds • hydrosurgical debridement with the Versajet Hydrosurgery System provides a high-pressure jet stream of saline to cut debris and keep the surgical field clean • this is the first systematic review evaluating the efficacy of hydrosurgery in chronic wounds • hydrosurgery is useful for the debridement of chronic wounds regarding the speed and quality of the procedure The following data were extracted from the studies: methods, participant profiles, types of intervention implemented for the study and control groups, and outcomes. Any disagreements between reviewers over the eligibility of particular studies were resolved by a third reviewer, who determined the inclusion of such studies. When a publication included relevant data from previous studies, the latest study was analysed. | Risk of bias in individual studies The level of evidence was determined according to the method of the Oxford Centre for Evidence-Based Medicine. 20 The bias of prospective randomised controlled studies was assessed using the Cochrane Collaboration's tool for assessing the risk of bias 21 and RevMan 5.4 software (ver.5.4, The Cochrane Centre, The Cochrane Collaboration, Copenhagen, Denmark). 22 The bias of retrospective comparative studies was evaluated using the Risk of Bias Assessment Tool for Non-Randomised Studies. 23 Quality of prospective non-comparative studies was evaluated using the three-domain tool (selection, ascertainment, and reporting) for evaluating the methodological quality of case reports and case series 24 proposed by the Evidence-Based Practice Center, Mayo Clinic. We sent an e-mail to all authors asking for the detailed methods of the studies that were not described in the manuscript. Only one author responded; however, no answers were available regarding the detailed methods of the study. Bias was assessed by two of the authors independently. 
A third opinion was sought in case of disagreement between the authors, and a consensus was subsequently achieved. | Statistical analysis A meta-analysis was performed using RevMan software (ver.5.4, The Cochrane Centre, The Cochrane Collaboration, Copenhagen, Denmark). 22 A random-effects model for outcomes was used. A P-value of .05 was used to determine statistical significance. | Included studies After excluding duplicates, 497 studies were extracted from the three databases, and 22 studies were identified after screening (by the evaluation of the titles and abstracts). Seven studies met the criteria of this review after the full-text screening. There were two prospective randomised controlled studies, 23,24 two retrospective comparative studies, 25,26 and three prospective non-comparative studies. [27][28][29] The PRISMA flow diagram is shown in Figure 1, and the study design and level of evidence of each study are shown in Table 1. The numbers of patients, wound types, and techniques compared are shown in Table 2. Study outcomes are shown in Table 3. The forest plot results of the time for the debridement procedure in the two prospective randomised controlled studies are shown in Figure 2. 3.2 | Review of the effectiveness of hydrosurgery debridement | Procedure time Procedure time using hydrosurgery was reported in five studies, including two prospective randomised controlled studies. [23][24][25][26][27][29] The mean procedure time using hydrosurgery in these studies ranged between 5.8 and 12 minutes. The procedure time was significantly shorter with hydrosurgery than with the conventional methods in two prospective randomised controlled studies. The median devitalised areas for hydrosurgery/control in the two studies were 5.3/3.7 cm² 23 and 5.2/6.2 cm², 24 and no significant difference was observed between the two groups in either study. The results of the forest plot are shown in Figure 2. The mean difference in procedure time between the techniques was −8.87 minutes, and the procedure time was shorter using hydrosurgery than using the conventional methods. There was moderate heterogeneity. Granick et al's retrospective comparative study reported no statistical difference in total debridement time between the two methods. 25 | Quality of debridement The number of debridements needed to adequately prepare the wound bed for closure or secondary healing was evaluated in five studies, including one retrospective comparative study. [25][26][27][28][29] More than 70% of the cases in which hydrosurgery was used achieved adequate debridement in one session. The number of debridements was significantly lower in the hydrosurgery group (median, one session) than in the conventional method group, according to Granick et al. 25 | Wound closure The period of wound closure was evaluated in five studies. 23,24,26,28,29 No statistical difference in the period of wound closure was observed between hydrosurgery and conventional methods in two prospective randomised controlled studies. 23,24 | Pain associated with hydrosurgery debridement Pain during the procedure was evaluated in two studies using the visual analogue scale. 26,28 Pain associated with hydrosurgery debridement was reportedly mild to moderate, and it was tolerable to the patients. | Bacterial count A bacterial analysis was performed in three studies. 24,26,29
In all studies, the bacterial load was reduced after hydrosurgery debridement; however, there was no difference compared with the conventional methods in a prospective randomised controlled study. 23,24 | Cost Cost analysis was performed in three studies. [23][24][25] Two studies showed potential cost savings using hydrosurgery 23,25 ; however, one study reported no difference between the methods. 24
FIGURE 1 The preferred reporting items for systematic reviews and meta-analyses flow diagram adopted for the final selection of studies included in the review.
3.2.6 | Other potential benefits of hydrosurgery Less saline use 23 and blood loss 24 were reported during the debridement procedure using hydrosurgery. | Safety outcome Several adverse events were reported; however, no device-related serious adverse event was observed. [25][26][27][28][29][30][31] 3.3 | Risk of bias within studies Figure 3 illustrates the risk of bias in the two prospective randomised controlled studies. The protocol for Caputo et al's study 25 was obtained from ClinicalTrials.gov 32 (NCT00521027). The risk of bias in the two retrospective comparative studies is shown in Figure 4, and the results of the methodological quality case series evaluation are shown in Figure 5. As there were only two prospective randomised controlled studies out of the seven selected studies and much information was missing, the overall risk of bias was unclear. The major risks of bias involved an unclear study protocol and poor description of the inclusion/exclusion criteria, which led to possible selection and detection biases. The method of outcome assessment was not mentioned or not appropriately described. | DISCUSSION As the number of patients with chronic wounds continues to increase, 33,34 it has become critical to improve these patients' outcomes. In particular, the increase in the incidence of diabetic foot ulcers is significant, and proper management of ulcers and avoidance of major amputations are essential for patients. 35 Wound bed preparation is the first step in the treatment of chronic wounds. 36,37 In recent years, this treatment concept has become widely known as "TIME". 36,38 As the first step in this process, the "T" stands for the assessment and debridement of nonviable or foreign materials (including host necrotic tissues, adherent dressing materials, multiple organism-related biofilms or sloughs, exudates, and debris) on the surface of the wound. After "T" come "I" (controlling inflammation and infection), "M" (restoration of moisture balance), and "E" (wound edge advancement). Hydrosurgery is a debridement device with several features. First, high-speed saline flows parallel to the wound surface, which allows for the removal of debris and other nonviable tissues. Second, the excised tissues, wound slough, and biofilms can be removed by the Venturi effect. 39 This material is suctioned into the handpiece, and this allows the wound surface to be cleaned and necrotic and infected tissues to be removed. Similarly, the debridement can be performed tangentially to the wounds, which is extremely useful for wound surfaces in chronic ulcers. Furthermore, the depth of one slice of debridement by hydrosurgery is much thinner than that by scissors or scalpels, 40 allowing more accurate debridement to be performed and more viable tissue to be salvaged. There are only two systematic reviews of hydrosurgery in burn wounds, 13,14 and they reported no significant difference in efficacy between hydrosurgery and conventional methods.
However, these reviews also reported that there is evidence for immediate skin grafting after debridement, high graft take rates, and faster healing, and there is fair and limited evidence concerning cost-effectiveness. There are several points in our review that confirm the effectiveness of hydrosurgery in chronic wounds, and they are discussed in the following paragraphs. First, hydrosurgery enables rapid debridement. Although a relatively small area was debrided, the procedure time was reduced to 7-9 minutes using hydrosurgery. Hard tissues, such as third-degree burn wounds, are considered difficult to debride by hydrosurgery. 31,41 The hard eschar usually needs to be initially removed separately with scissors or scalpel debridement, followed by hydrosurgery. However, in chronic wounds, these hard, necrotic tissues are rarely present or already removed, and the wound bed is often soft and contains infected granulation tissues. Therefore, there is no need to change the tools for debridement. In addition, a clean, bloodless surgical field is always available because of the high-speed water jet that cleanses the wound. As debridement can be performed using only one device, hydrosurgery, and clean surgical fields are maintained during the surgery, surgeons can perform rapid debridement. Moreover, the angled tip of the handpiece allows surgeons to perform debridement in small spaces or in pocket spaces that are difficult to debride by scissors or scalpels. 16,41
FIGURE 2 Forest plot of the debridement procedure time. CI, confidence interval.
FIGURE 3 Risk of bias summary of the prospective randomised controlled studies.
The quality of the debridement is also remarkably high. In the case of chronic wounds, multiple sessions of conventional debridement are often needed to achieve proper wound bed preparation. However, using hydrosurgery, only a single debridement achieves adequate wound beds in most cases. [27][28][29]31 The reason for this seems to be that bacterial contamination of the wound can be efficiently removed and cleaned by the water jet. 28,31 Moreover, the quality of the wound bed obtained with hydrosurgery creates a smoother, less-irregular wound surface, which allows immediate skin grafting. 42
FIGURE 4 Risk of bias assessment tool for non-randomised studies.
FIGURE 5 Methodological quality case series evaluation.
The rapid debridement and high-quality debridement are expected to be cost-effective and shorten patients' hospital stay. 27,28 Unfortunately, none of the literature in this study evaluated the overall cost of treatment through to wound healing. New necrosis after debridement and graft loss because of infection were reported 28,29 ; however, they are common events after the debridement of chronic wounds. Therefore, there was no obvious device-related adverse event reported in the included studies. Cost analysis was performed in three studies, [23][24][25] and only two studies 23,25 showed potential cost savings because of the shorter procedure time with hydrosurgery or the smaller number of debridement procedures. Therefore, the overall cost of the treatment has not been closely examined. For this reason, future research on adequate cost scrutiny, especially the general cost of treatment, is warranted. To our knowledge, this is the second study reviewing the efficacy and safety of hydrosurgery and the first study reviewing the use of hydrosurgery for chronic wounds.
The most important limitation of this review is the poor quality of the studies, which include relatively small sample sizes, unclear study designs, and a bias that cannot be ignored. | CONCLUSIONS Surgical debridement has an important role in the treatment of chronic wounds. From this review, we conclude that hydrosurgery provides rapid and effective debridement in chronic wounds, even though there is no difference between the periods of wound closure. However, high-quality studies are limited, and the number of cases included in each study was small. Therefore, further controlled trials need to be performed before hydrosurgery can become the standard care in the debridement of chronic wounds.
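The pooled mean difference of −8.87 minutes reported in the Results came from a random-effects model fitted in RevMan. The sketch below shows a standard DerSimonian-Laird calculation for a mean-difference outcome so that such forest-plot numbers can be sanity-checked outside RevMan; the two study entries are placeholder values, not the data actually extracted from the included trials.

```python
import numpy as np

def dersimonian_laird(md, se):
    """Random-effects pooled mean difference (DerSimonian-Laird) with a 95% CI,
    plus the usual heterogeneity statistics Q, tau^2, and I^2."""
    md, se = np.asarray(md, float), np.asarray(se, float)
    v = se ** 2
    w = 1.0 / v                                   # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * md) / np.sum(w)
    Q = np.sum(w * (md - fixed) ** 2)
    df = len(md) - 1
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C)                 # between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * md) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    return pooled, ci, tau2, i2

# Placeholder per-study inputs (mean difference in minutes, hydrosurgery minus control,
# and its standard error); substitute the values extracted from the two RCTs.
md = [-7.0, -11.0]
se = [2.0, 2.5]

pooled, ci, tau2, i2 = dersimonian_laird(md, se)
print(f"pooled MD = {pooled:.2f} min, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), "
      f"tau^2 = {tau2:.2f}, I^2 = {i2:.0f}%")
```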
2021-03-25T06:16:39.224Z
2021-03-23T00:00:00.000
{ "year": 2021, "sha1": "e781e92505ee522895ce26f4e1726d2c48851095", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/iwj.13528", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "67a39b21d199fab278fdf6f0d8da31d7d3bae937", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
229719129
pes2o/s2orc
v3-fos-license
Left ventricular pseudoaneurysm: An unexpected finding A left ventricle pseudoaneurysm (LV PSA) is defined as a free wall rupture of the left ventricle contained by the adjacent pericardial tissue. This rare complication is most commonly encountered following myocardial infarction, trauma, or infection. Surgery is typically warranted to avoid progression to spontaneous rupture, which may potentially lead to cardiac tamponade and death. Cardiac magnetic resonance imaging is the modality of choice to characterize left ventricle morphology and function. Accurate distinction between a pseudoaneurysm and a true aneurysm is crucial, since management and prognosis are significantly different between these 2 entities. We present a case of a 63-year-old male heart transplant recipient, admitted for suspicion of acute cellular rejection, with an unexpected finding of a LV PSA. Introduction Left ventricle pseudoaneurysm (LV PSA) is defined as an outpouching formed by myocardial free wall rupture, with the extravasated contents contained by the adjacent pericardium and scar tissue [1][2][3][4][5] . LV PSAs are rare entities, most commonly attributed to free wall rupture of myocardial tissue due to infarction. Other causes include cardiac infection, cardiac surgery, and trauma. LV PSAs are seen after myocardial infarction in 0.2%-0.3% of cases, most commonly after large infarcts in elderly and male patients. The prognosis of an untreated cardiac pseudoaneurysm is poor, with high rupture rates, particularly in the early period following myocardial infarction [2] . By definition, pseudoaneurysms do not contain all layers of myocardial tissue, as opposed to true aneurysms, and must be differentiated for appropriate management. Clinical findings are usually nonspecific, such as chest pain, congestive heart failure, thromboembolic events, and arrhythmias. Sudden death is the least frequent presentation. A new to-and-fro murmur and electrocardiogram abnormalities are frequently seen [2] . Since the presentation is often nonspecific, a high degree of clinical suspicion is necessary to diagnose LV PSA. Diagnostic imaging is crucial to establish the diagnosis and guide appropriate treatment. Cardiac magnetic resonance (MR) and computed tomography are mainstays in anatomic characterization and differentiation from other etiologies such as a true left ventricular aneurysm. There are numerous imaging features described in the literature to assist in the accurate diagnosis and differentiation of LV PSA [4] . We present a case of a 63-year-old male with a prior orthotropic heart transplant admitted for suspicion of acute rejection, with an unexpected finding of a LV PSA. Case Report A 63-year-old male heart transplant recipient was admitted as a transfer from an outside hospital for suspicion of acute cellular rejection. The patient initially presented with general malaise, nausea, and vomiting for a week. At the outside hospital, he was found to have diabetic ketoacidosis, for which treatment with insulin infusion was instituted, and the patient ultimately developed anuric renal failure. Episodes of severe bradycardia were also reported, which did not require intervention and resolved with correction of acidosis and electrolyte abnormalities. The patient was then transferred for escalation of care and suspicion of acute cellular rejection. At admission, he reported substance abuse and noncompliance with immunosuppressor medications. 
Besides a heart transplant ten years before presentation, his past medical history also included coronary artery disease, diabetes mellitus, dyslipidemia, hypertension, and chronic kidney disease. Initial workup with echocardiography reported moderate to severe left ventricle hypertrophy. It also showed an outpouching at the apical segment of the left ventricle, which was connected to its cavity by a narrow neck. Color flow Doppler demonstrated a bidirectional shunt through the discontinuity at the apical segment of the left ventricle, further suggesting free wall rupture with pseudoaneurysm formation ( Fig. 1 A, B). A small amount of fluid and clot were noted in the pericardium without clear signs of cardiac tamponade ( Fig. 2 A, B). Further assessment with cardiac MR again demonstrated a large left ventricular apical outpouching with active extravasation through a narrow myocardial opening. The neck's maximal width was 7 mm, and the outpouching sac measured 48 mm in maximal diameter, with a neck to sac ratio of less than 0.5 (0.15), which is strongly suggestive of a pseudoaneurysm ( Fig. 3 A, B). Additionally, cardiac MR images demonstrated discontinuity of the myocardial wall, with greater than 50% decrease in the aneurysm sac wall thickness measured at 1 cm from the aneurysmal neck, another characteristic that supports LV PSA diagnosis. Moderate left ventricular hypertrophy was noted along with mild enlargement of the left ventricle and preservation of systolic function (ejection fraction of 61%). Moderate pericardial effusion was confirmed with mass effect upon the free wall of the right ventricle ( Fig. 4 A, B). Late gadolinium-enhanced images showed subendocardial enhancement of the mid to basal inferolateral septal wall, involving greater than 50% myocardial thickness, associated with myocardial thinning, reflecting scar from remote infarct and nonviability ( Fig. 5 A-C). The patient was previously admitted for acute cellular rejection grade 2 R 1 month prior to this presentation. At that time, echocardiography depicted left ventricular hypertrophy with reduced function without additional morphologic abnormalities. He was treated with thymoglobulin for four days, and repeated biopsy showed improvement of his rejection to grade 1 R. At the current admission, repeated right ventricle biopsy was negative for acute cellular rejection (0R), despite continued intermittent refusal of medications and treatment modalities. After a multidisciplinary meeting and thorough discussion of the patient's clinical and radiologic findings, the exact etiology of the patient's LV PSA remained uncertain. Cardiac MR demonstrated signs of previous myocardial infarction, including myocardial thinning and late gadolinium enhancement of the basal and mid inferoseptal wall, however there was no clear evidence of extension to the left ventricular apex. The cardiothoracic surgery team decided not to proceed with correction of the left ventricle pseudoaneurysm given the patient's high-risk clinical status, including history of acute graft rejection, thrombocytopenia, and reduced functional capacity. During admission, the patient's renal function and functional status progressively deteriorated, and he was ultimately referred to palliative care. Discussion LV PSA is defined as a free rupture of the myocardial wall contained by pericardial adhesion. This rare pathology is a complication that follows myocardial injury, most commonly myocardial infarction [1][2][3][4][5] . 
Clinically, findings are nonspecific, ranging from mild dyspnea to heart failure and sudden cardiac death. Untreated pseudoaneurysms have a high risk of rupture, which remains even several years after diagnosis [4,6]. Various imaging modalities have been used to assess LV PSA. Historically, angiography was considered the modality of choice; however, currently, noninvasive techniques are preferred in most cases. On chest radiographs, a mass or abnormal contour of the heart may be seen, although the most common finding reported is simply heart enlargement [6]. Peripheral calcification of the pseudoaneurysm sac may be identified at later stages. On transthoracic echocardiography, LV PSA usually presents as a focal outpouching with a narrow neck connecting the saccular pseudoaneurysm to the ventricular cavity. Color Doppler may aid in diagnosis by depicting aliasing and to-and-fro flow through the neck of the pseudoaneurysm. Additional findings may include pericardial effusion with variable degrees of echogenicity reflecting blood products and thrombosis [5,7,8]. Computed tomography (CT) provides excellent spatial resolution with accurate identification and morphologic assessment of the pseudoaneurysm sac. Additional findings may include pericardial effusion with variable attenuation values, chest parenchymal abnormalities characteristic of heart failure, and thromboembolic events [9]. Cardiac MR imaging is a valuable noninvasive technique that allows anatomic and functional characterization of the left ventricle. LV PSA typically shows loss of epicardial fat signal at its orifice. Cine cardiac MR may demonstrate myocardial wall dyskinesia and blood flow turbulence in the cardiac chambers and through the myocardial opening. Delayed gadolinium enhancement images may indicate late enhancement of pericardial tissue adjacent to the pseudoaneurysm sac in addition to the expected findings related to the patient's baseline etiology (e.g., myocardial infarct, trauma, infection). Cardiac MR also presents an advantage in the assessment of thrombus, a commonly encountered complication of LV PSAs [3][4][5] . The primary differential diagnosis of LV PSA is a true left ventricle aneurysm (LVA). In most LV PSA cases, surgery is warranted to avoid catastrophic outcomes such as spontaneous rupture with progression to pericardial tamponade and death. In contrast, true aneurysms have a better prognosis and may be treated conservatively in most cases [1][2][3][4][5][6] . Thus, accurate characterization and differentiation between these two entities are crucial for appropriate management and prognosis. Previous studies have described distinct imaging features to differentiate between true and false aneurysms of the left ventricle. One of the first imaging features described was localization and morphology. True LVAs typically have a wide neck, are often apical, may contain thrombus, and are rarely associated with pericardial enhancement. Pseudoaneurysms classically have a narrow neck, are most commonly inferior or lateral in location, often contain thrombus, and are commonly associated with pericardial enhancement. Some studies suggest that an orifice-to-pseudoaneurysm diameter ratio of < 0.5 indicates a pseudoaneurysm rather than a true aneurysm, although these findings are controversial in the literature.
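As a worked example using the measurements reported for the present case, the orifice-to-sac criterion gives ratio = neck width / sac diameter = 7 mm / 48 mm ≈ 0.15, which is well below the 0.5 threshold and therefore favours a pseudoaneurysm over a true aneurysm by this criterion, in keeping with the ratio of 0.15 reported above.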
Additionally, a recent study reported that a more than 50% decrease in aneurysm sac wall thickness measured at 1 cm from the aneurysmal neck is a sensitive and specific marker to diagnose LV PSA [4] . Despite these distinct characteristics, the differentiation between these two entities often presents a challenge to radiologists and cardiologists. Surgical treatment is typically warranted in acute cases of large and symptomatic LV PSAs. The benefits of surgery outweigh the risks of rupture in almost all cases. Previous studies suggest that conservative management can be considered in asymptomatic patients with small aneurysms (less than 3 cm of dimension) or increased surgical risk [10 ,11] . Conclusions In this case report, we demonstrated an atypical case of an apical LV PSA in a heart transplant recipient without a definitive finding of an underlying apical myocardial infarction. Although an accurate distinction between false and true aneurysms is crucial for appropriate management, this differentiation is often challenging clinically and radiologically. This case report describes the essential features of LV PSAs in different imaging modalities and highlights the importance of vigilance even in atypical clinical presentations.
Familial Interstitial Lung Disease in Two Young Korean Sisters Most of the interstitial lung diseases are rare, chronic, progressive and fatal disorders, especially in the familial form. The etiology of the majority of interstitial lung diseases is still unknown. Host susceptibility, genetic and environmental factors may influence the clinical expression of each disease. Familial interstitial lung diseases may be associated with mutations of surfactant protein B and surfactant protein C or with additional genetic mechanisms (e.g. mutation of the gene for ATP-binding cassette transporter A3). We found a 21 month-old girl with respiratory symptoms, abnormal radiographic findings, and abnormal open lung biopsy findings compatible with nonspecific interstitial pneumonitis, similar to those of her older sister, who died from this disease. We performed genetic studies of the patient and her parents, but we could not find any mutation in our case. High-dose intravenous methylprednisolone and oral hydroxychloroquine were administered, and she is still alive without progression during 21 months of follow-up. INTRODUCTION Interstitial lung disease (ILD) is defined as a specific form of chronic fibrosing interstitial pneumonitis limited to the lung. It is a rare, chronic, progressive, usually fatal interstitial lung disorder (1)(2)(3). In children, interstitial pneumonitis (IP) presents with a wide spectrum of histologic abnormalities that usually do not fit the classification for IP used in the adult population (4). IP is a group of very rare diseases in childhood (5). The familial form is described as IP occurring in at least two members of a family (2,3,6). The proportion of familial ILD is unknown; a genetic basis is estimated in 0.5-2.2% of cases (6). It is transmitted as an autosomal dominant trait with reduced penetrance in approximately half of the clinically unaffected family members, but an autosomal recessive pattern is not excluded (3,(6)(7)(8). Although the etiology of the majority of ILDs is still unknown, there is increasing understanding of the cellular and cytokine interactions associated with inflammation and fibrosis. Host susceptibility, genetic factors and environmental cofactors may influence the clinical expression of each disease (9). The inflammatory process of ILD begins with an initial injury to the alveolar and interstitial structures (alveolitis), which is followed by a stage of tissue repair and variable degrees of fibrosis, and an alveolar infiltrate with variable amounts of proteinaceous material (4,10). The accumulation of proteinaceous material in the alveolar space is a characteristic finding in lung diseases associated with surfactant system abnormalities (4). There are reports that mutations in surfactant protein B (SP-B) and surfactant protein C (SP-C) are associated with familial usual interstitial pneumonitis in adults and with cellular nonspecific interstitial pneumonitis (NSIP) in children (6). Recently, a mutation in the gene encoding the hydrophobic, lung-specific SP-C was discovered in association with a decrease in the level of SP-C in familial ILDs (4,10,11). Here we report a very young Korean girl with symptoms and radiological findings of ILD similar to those of her older sister, who died from this disease. Open lung biopsy showed that the patient had NSIP. Chest computed tomography (CT) scans of both sisters were compared, and the findings, etiology and treatment of this disease are discussed.
CASE REPORT A 21 month-old girl was brought to Asan Medical Center suffering from fever, cough and tachypnea for 4 months. Abnormal lung findings were first noted on a chest radiograph taken for fever when she was 8 months old. No further evaluation was performed at that time. The symptoms of the patient at 21 months of age were the same as those that her sister had shown at age 4 yr. Physical examination of the patient revealed tachypnea at rest (beyond 60/min) and chest retraction without rales or wheezing, but no digital clubbing. On arrival at the hospital, the Denver symptom score (12) of ILD gave a value of 2 points, meaning the patient showed respiratory symptoms but oxygen saturation was normal in room air under all conditions. Blood, urine, sputum and stool studies revealed no evidence of acute viral or bacterial infection. Neither virus (adenovirus, influenza virus, parainfluenza virus, respiratory syncytial virus, cytomegalovirus) nor bacteria was found in bronchoalveolar lavage fluid. The chest radiograph showed a dense hazy area at the central region of both lungs and blunting of the left costophrenic angle. Chest CT (Fig. 1A) demonstrated diffuse fibrosis on the medial portion of both lung fields and subpleural consolidation along both lateral pleura. Based on the similar findings from chest CT (Fig. 1B), the sister of the patient had been diagnosed with uncharacterized ILD, and died 5 months after diagnosis. Surgical open lung biopsy (Fig. 2) was performed at 22 months of age. A diagnosis of NSIP was made based on the findings of diffuse, uniform thickening of the interstitium with lymphoplasmacytic infiltration and collagen fibrosis. Some alveoli contained accumulations of intra-alveolar macrophages, while some alveoli had hyperplastic type II alveolar pneumocytes. Genetic studies were performed with the lung biopsy specimens and peripheral blood mononuclear cells. These revealed no identifiable mutation in the genes encoding surfactant proteins. Examination of peripheral blood cells from the patient's parents also showed no evidence of mutations in these genes. The patient was treated with high-dose intravenous methylprednisolone (30 mg/kg/day, 3 doses every other day, monthly) and oral hydroxychloroquine (daily). During treatment, tachypnea and dyspnea on exertion were still evident and the Denver symptom score was 2 points at follow-up.
Follow-up high resolution CT (HRCT) showed that there was no further disease progression after 21 months. DISCUSSION There have been previous reports of familial ILD, and infants with this disease can become symptomatic in the first 6 weeks of life (1). Such patients have tachypnea and respiratory distress with grunting, intercostal and subcostal retraction, and cyanosis. The mortality rate is high, especially in children younger than 1 yr (1). The ILD of unknown origin that developed in the present patient's sister progressed quickly, and she died at 4 yr of age, 5 months after she had been diagnosed. Risk factors for ILD include viruses (adenovirus, Epstein-Barr virus, influenza virus, cytomegalovirus), mycoplasma or other infectious agents, drugs, chronic aspiration, environmental factors like metal dust and wood dust, and genetic predisposition (5,13). Given that investigations on the etiology of the present patient ruled out any environmental or infectious causes, we examined genetic factors. There have been analyses of candidate loci near the human leukocyte antigen (HLA) region of chromosome 6 to suggest a genetic basis for ILD in familial cases (6). In addition, an association has been established between interstitial pulmonary fibrosis and α1-antitrypsin inhibition alleles present on chromosome 14 (2). Recently, several investigators reported that multiple heterozygous mutations in the surfactant protein C gene were associated with ILD (10). Triggers such as infection or toxins may contribute to the wide diversity in clinical presentation of surfactant protein C gene-associated pulmonary fibrosis (6). In our case, the pathological findings were compatible with NSIP, but we did not find a surfactant protein gene mutation in lung biopsy samples or peripheral blood cells from the patient, or in peripheral blood cells from her parents. There was a report of fatal respiratory diseases in full-term infants with symptoms of surfactant deficiency in whom a deficiency of surfactant protein B was excluded. It suggested that mutations of the gene for ATP-binding cassette transporter A3 (ABCA3), which is involved in the transport of phospholipids and sterols, could be associated with unexplained surfactant deficiency in full-term infants (14). HRCT allows early diagnosis of ILD and commonly shows patchy, predominantly peripheral, subpleural, bibasilar reticular abnormalities (15). The extent of pulmonary infiltration on CT is an important predictor of survival (2). We had two similar HRCT images, one from the patient and the other from her sister, whose disease had progressed further. The diagnosis of ILD can be confirmed by open lung biopsy (16). Chronic pneumonitis of infancy is characterized by interstitial thickening with mesenchymal cells rather than inflammatory cells and by an alveolar infiltration with variable amounts of proteinaceous material (10,15). The findings of the lung biopsy specimen from the present patient also showed mild, diffuse, and uniform thickening of the interstitium with mild fibrosis, consistent with the diagnosis of NSIP. Corticosteroid administration (1,16,17) has been the mainstay of therapy for ILD, and chloroquine and hydroxychloroquine (1,17) have also been used successfully in the treatment of childhood IP. However, the exact mechanism of action of this treatment is unknown. For the present patient, treatment with methylprednisolone and hydroxychloroquine was initiated soon after ILD diagnosis due to her family history.
The Denver symptom score was used to evaluate the symptoms, and continued use of this system indicated that the patient's disease was not progressing during the 21 months of follow-up. In summary, we have described an infant with NSIP whose sibling died from a similar disease. Early diagnosis and treatment with corticosteroids in combination with hydroxychloroquine could be considered for familial ILD. Determining the etiology of the familial form of this disease is likely to lead to better treatment for these patients.
Association between clinic physician workforce and avoidable readmission: a retrospective database research Background To reduce hospitalization costs, it is necessary to prevent avoidable hospitalization as well as avoidable readmission. This study aimed to examine the relationship between clinic physician workforce and unplanned readmission for ambulatory care sensitive conditions (ACSCs). Methods The present study was a retrospective database research using nationwide administrative claims database of acute care hospitals in Japan. We identified patients aged ≥65 years who were admitted with ACSCs from home and discharged to home between April 2014 and December 2014 (n = 127,209). The primary outcome was unplanned readmission for ACSCs within 30 or 90 days of hospital discharge. A hierarchical logistic regression model was developed with patients at the first level and regions (secondary medical service areas) at the second level. Results The 30-day and 90-day ACSC-related readmission rates were 3.7 and 4.6%, respectively. The high full-time equivalents (FTEs) of clinic physicians per 100,000 population were significantly associated with decreased odds ratios for 30-day and 90-day ACSC-related readmissions. This association did not change even when sensitivity analyses was conducted. Conclusions Among patients who had history of admission for ACSCs, greater clinic physician workforce prevented the incidence of readmission because of ACSCs. Regional medical plans to prevent avoidable readmissions should incorporate policy interventions that focus on the clinic physician workforce. Background The Japanese population is the most rapidly aging in the world [1]; approximately 28% of the population (or 35.6 million people) are aged ≥65 years [2]. The > 65 s account for 60% of the national medical care expenditure [3], and 86% of older adults have at least one chronic disease [4]. Hospitalization costs for chronic diseases and their complications are major contributors to increased medical expenses [5]. A previous study showed that the risk of avoidable hospitalization was higher among individuals who were aged ≥65 years compared with younger individuals [6]. The prevention of avoidable hospitalization of older adults is a viable strategy to reduce hospitalization costs. The concept of ambulatory care sensitive conditions (ACSCs) is often used in the context of avoidable hospitalization [7]. ACSCs are defined as conditions for which appropriate outpatient care or early intervention (to prevent complications or more severe disease) can prevent the need for hospitalization [8]. ACSCs include diabetic complications, congestive heart failure, chronic obstructive pulmonary disease, bacterial pneumonia, and urinary tract infections. Readmission is a common phenomenon and imposes a burden on the medical system [9]. In England and the United States, there is increasing impetus for efforts to prevent readmission for ACSCs [10]. Because appropriate outpatient care can prevent readmission for ACSCs, it is necessary to prevent both avoidable hospitalization and avoidable readmission. Most previous studies that have examined the relationship between primary care physician per population and hospitalization for ACSCs have revealed lower hospitalization rates for ACSCs in areas with greater access to primary care [7]. However, to the best of our knowledge, no studies have examined the relationship between primary care physician workforce and readmission for ACSCs. 
Therefore, the primary objective of this study was to examine the relationship between the clinic physician workforce involved in primary care per population and unplanned readmission for ACSCs among older people. Data source The Diagnosis Procedure Combination (DPC) system is a case-mix classification system used in Japan for reimbursements to acute care hospitals under the public medical insurance scheme. We used data obtained from the DPC database which contains administrative claims data and discharge clinical summaries. The data were collected by the DPC research group from voluntarily participating hospitals, which account for approximately 50% of acute care hospitals in Japan [11]. The data were electronically collected through the uniform format stipulated by the Japanese Ministry of Health, Labour and Welfare for comparative analysis of many hospitals throughout Japan. The DPC database includes data pertaining to the following variables: hospital identifiers; patient demographics; zip code of the patient residence; major diagnoses, comorbidities present at admission and complications after admission recorded using the International Classification of Disease, 10th edition (ICD-10) codes; disease severity; length of stay; medications, surgical and interventional procedures with their specific dates of prescription and implementation; and discharge status. We also utilized data collected by the Ministry of Internal Affairs and Communications regarding the population, data collected by the Ministry of Land, Infrastructure, Transport and Tourism regarding residential area, and data collected by the Ministry of Health, Labour, and Welfare regarding the number of physicians and hospital beds. Subject inclusion and exclusion criteria We identified patients who fulfilled the following criteria: 1) patients aged ≥65 years at the time of admission; 2) patients admitted unexpectedly with ACSCs between April 1 and December 31, 2014; 3) patients with no history of hospitalization within the past year; 4) patients admitted from home and discharged to home. Patients were followed up until March 31, 2015. For individuals who had ≥2 admissions during the study period, the first admission was considered as the index hospitalization. Patients with missing data pertaining to independent variables were excluded. Statistical analysis The primary outcome variable was unplanned readmission for ACSCs within 30 days or 90 days of hospital discharge. ACSCs were those defined by Bardsley et al. [12]. Using 30-day or 90-day unplanned readmission for ACSCs as the dependent variables, a hierarchical logistic regression model was developed with patients at the first level and regions (secondary medical service areas) at the second level. We utilized a random intercept model with regions as random effects. A two-sided significance level of 0.05 was used, and all statistical analyses were performed using R software version 3.4.1 (R Foundation for Statistical Computing, Vienna, Austria). Patient-level variables The patient-level variables were: age; sex; comorbidities; body mass index (BMI); Barthel index at discharge; surgery under general anesthesia, epidural anesthesia or spinal anesthesia; length of hospital stay; and schedule of implementation of home care program after discharge. Age was grouped into four categories: 65-74 years, 75-84 years, 85-94 years, and ≥ 95 years. Comorbidities were those defined by Elixhauser et al. [13]. 
Only those comorbid conditions that affected at least 1% of subjects were included. Consequently, the following 19 comorbid conditions were included: uncomplicated hypertension; uncomplicated and complicated diabetes; cardiac arrhythmias; congestive heart failure; chronic pulmonary disease; solid tumor without metastasis; fluid and electrolyte disorders; renal failure; peptic ulcer disease excluding bleeding; valvular disease; liver disease; deficiency anemia; peripheral vascular disorders; other neurological disorders; rheumatoid arthritis or collagen vascular diseases; blood loss anemia; hypothyroidism; and depression. Region-level variables The region-level variables used were the full-time equivalents (FTEs) of clinic physicians per 100,000 population, the FTEs of hospital physicians per 100,000 population, the number of hospital beds per 100,000 population, and the population density of inhabitable areas (population/ inhabitable area ratio) in the secondary medical service areas of residence of each subject. The secondary medical service areas are subprefectural regions comprising of several municipalities [14]. These governmentstipulated regions are designed to provide comprehensive inpatient, outpatient, and long-term care in consideration of geographical conditions and access to necessities of daily life for residents. The region-level variables were represented by dummy variables indicating four quartiles in each of the secondary medical service areas, referring to a previous study [15]. We chose quartiles rather than a continuous variable due to the nonlinear relationship between the region-level variables and the outcomes. We also performed three sensitivity analyses. In the first sensitivity analysis, the region-level variables were represented by dummy variables indicating three tertiles and five quintiles in each of the secondary medical service areas instead of four quartiles. In the second sensitivity analysis, we used the number of clinic physicians per 100,000 population and the number of hospital physicians per 100,000 population in the secondary medical service areas instead of the respective FTEs. In the third sensitivity analysis, we restricted the target population to patients for whom the referral letter to clinic was issued during their hospital stay. Results A total of 127,209 patients from 1162 hospitals and 344 secondary medical service areas were included in the analysis. Figure 1 shows a schematic illustration of the patient selection process. Additional file 1: Table S1 shows the thresholds defining the quartiles of regionlevel variables in 344 secondary medical service areas. Table 1 presents the descriptive statistics for the study variables. The 30-day and 90-day ACSC-related readmission rates were 3.7 and 4.6%, respectively. The mean age at index hospitalization was 78.3 (SD = 7.9) years; approximately 54% patients were male. Most patients were able to perform the activities of daily living (ADL) independently. Approximately 40% patients were affected by uncomplicated hypertension. Table 2 presents the results of hierarchical logistic regression analysis for 30-day and 90-day ACSC-related readmissions. The 3rd and 4th quartiles of FTEs of clinic physicians per 100,000 population were independently associated with decreased odds ratio for 30-day and 90day ACSC-related readmissions. 
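Written out, the two-level model behind these estimates is a standard random-intercept logistic regression; the notation below is a schematic restatement of the Methods in our own notation, not the paper's:

logit Pr(Y_ij = 1) = β0 + x_ij′β + u_j,   u_j ~ N(0, σu²)

Here Y_ij indicates an unplanned ACSC-related readmission for patient i living in secondary medical service area j, x_ij collects the patient-level covariates together with the quartile dummies for the region-level variables, β are the fixed effects reported as odds ratios, and u_j is the region-specific random intercept treated as a random effect.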
The following factors showed a significant association with increased risk of 30-day and 90-day ACSC-related readmissions: age (≥75 years), male sex, BMI < 18.5 kg/m 2 , low ADL function at discharge, length of stay, schedule of implementation of home care program after discharge, uncomplicated diabetes, complicated diabetes, cardiac arrhythmia, congestive heart failure, chronic pulmonary disease, solid tumor without metastasis, renal failure, valvular disease, liver disease, rheumatoid arthritis/collagen vascular diseases, and hypothyroidism (Additional file 1: Table S2). Tables 3 and 4 present the results of sensitivity analysis using region-level variables categorized into three tertiles and five quintiles in each of the secondary medical service areas instead of four quartiles, respectively. The high FTEs of clinic physicians per 100,000 population showed a significant association with a decreased risk of 30-day and 90-day ACSC-related readmissions. Table 5 presents the results of sensitivity analysis using the number of clinic physicians per 100,000 population and the number of hospital physicians per 100,000 population in the secondary medical service areas instead of their FTEs. The association between the number of clinic physicians per 100,000 population and the 30-day/90-day ACSC-related readmission became weaker but was statistically significant. Table 6 presents the result of sensitivity analysis wherein the target population was restricted to patients for whom the referral letter to clinic was issued during their hospital stay. A total of 48,832 patients were included in this sensitivity analysis. The 4th quintile of FTEs of clinic physicians per 100,000 population was significantly associated with decreased risk of 30-day and 90-day ACSC-related readmissions. Discussion To the best of our knowledge, this is the first study to evaluate whether the FTEs of clinic physicians per 100, 000 population affects the incidence of unplanned readmission for ACSCs. In this nationwide study, increase in clinic physician workforce was associated with a lower risk of readmission for ACSCs within 30 and 90 days. These findings did not change even in the sensitivity analyses. In previous studies, higher number of primary care physicians was associated with lower hospitalization rates for ACSCs [7]. In the present study, adequate availability of physicians involved in primary care was associated with lower risk of admission as well as readmission for ACSCs. A systematic review examined the organizational aspects of primary care that contribute to the reduction in avoidable hospitalization; the results showed that adequate supply of primary care physicians and long-term relationship between primary care physicians and patients helped reduce hospitalization for chronic ACSCs [16]. Our study also suggests that long-term relationship between physicians and patients helps reduce readmission for ACSCs. In Japan, most hospitals have outpatient departments and physicians working at both hospitals and clinics provide primary care services. It is, therefore, necessary to take into consideration the primary care function of hospitals when examining the association between clinic physician workforce and readmission for ACSCs. According to the data collected by the Japanese Ministry of Health, Labour and Welfare (2014), the average number of clinic physicians per clinic was 1.3 [17]. 
Therefore, it is possible that regular patient visits to a specific clinic promote more robust physician-patient relationship as compared with regular visits to a specific hospital outpatient department. Consequently, patients who resided in an area with higher workforce of clinic physicians showed a lower risk of readmission for ACSCs in this study. To take into consideration the primary care function of hospitals, we took two measures in our analyses. First, we included FTEs of hospital physicians per 100,000 population as explanatory variables. The FTEs of hospital physicians may reflect the workforce serving both inpatients and outpatients at hospitals. In our result, the negative association between FTEs of clinic physicians and readmission for ACSCs was identified even after adjusting for FTEs of hospital physicians. Second, we performed sensitivity analysis wherein we restricted the target population to patients whose referral letter was issued during their hospital stay. In most cases, the referral letter issued during hospitalization requests for ongoing outpatient care at the clinic where patient had regular visit before the hospitalization. In the sensitivity analysis, therefore, we restricted the target population to patients who regularly visited their clinic after discharge; the results revealed a negative association between FTEs of clinic physicians and readmission for ACSCs. In the other sensitivity analysis, we used the number of clinic physicians per 100,000 population instead of their FTEs. The association between supply of clinic physicians and risk of readmission for ACSCs became weaker as compared with that observed with use of FTEs of clinic physicians per 100,000 population. A previous study identified a stronger association between FTEs of primary care physicians per 10,000 population and ACSC hospitalizations compared with number of primary care physicians per 10,000 population; in addition, FTEs of primary care physicians provided a more accurate reflection of the availability of primary care physician compared with their number [18]. These findings are consistent with our results. Some limitations of our study should be acknowledged. First, because of the nature of the DPC database, our estimation of the risk of readmission was limited to patients who were re-hospitalized at the same hospital as the index hospitalization. Therefore, the risk of readmission may have been underestimated. However, according to the study conducted at a Japanese hospital, the readmission rate of older patients with heart failure within 30 days and 1 year was 6-8% and 18-23%, respectively [19]. In another study based on data from the Japanese cardiac registry of heart failure in cardiology (JCARE-CARD), the readmission rate of patients with chronic heart failure for acute exacerbation within 1 year was approximately 25% [20]. In this DPC database, readmission rate of patients with heart failure within 30 days and 1 year was 6.6 and 22.8%, respectively (data not shown). Consequently, our estimation of the risk of readmission seems reasonable. Second, we utilized clinic physician workforce as covariates; however, we could not determine the number of clinic physicians disaggregated by specialty from the data collected by the Ministry of Health, Labour and Welfare. 
Although we could not use the number of physicians specializing in general internal medicine or family medicine as covariates, ACSCs include diseases that require treatment in other clinical departments, such as dermatology, otorhinolaryngology, obstetrics and gynecology, or dentistry [12]. Furthermore, there is no general practitioner system in Japan and most physicians become specialists [21]. Therefore, primary care is often provided by different specialists [22]. Consequently, the utilization of clinic physician workforce as covariates may be acceptable. Third, information about patients' families and caregivers was not available from the database. The caregivers' ability to care is liable to influence the risk of readmission and may have confounded our results.
An international validation of the AO spine subaxial injury classification system To validate the AO Spine Subaxial Injury Classification System with participants of various experience levels, subspecialties, and geographic regions. A live webinar was organized in 2020 for validation of the AO Spine Subaxial Injury Classification System. The validation consisted of 41 unique subaxial cervical spine injuries with associated computed tomography scans and key images. Intraobserver reproducibility and interobserver reliability of the AO Spine Subaxial Injury Classification System were calculated for injury morphology, injury subtype, and facet injury. The reliability and reproducibility of the classification system were categorized as slight (ƙ = 0–0.20), fair (ƙ = 0.21–0.40), moderate (ƙ = 0.41–0.60), substantial (ƙ = 0.61–0.80), or excellent (ƙ = > 0.80) as determined by the Landis and Koch classification. A total of 203 AO Spine members participated in the AO Spine Subaxial Injury Classification System validation. The percent of participants accurately classifying each injury was over 90% for fracture morphology and fracture subtype on both assessments. The interobserver reliability for fracture morphology was excellent (ƙ = 0.87), while fracture subtype (ƙ = 0.80) and facet injury were substantial (ƙ = 0.74). The intraobserver reproducibility for fracture morphology and subtype were excellent (ƙ = 0.85, 0.88, respectively), while reproducibility for facet injuries was substantial (ƙ = 0.76). The AO Spine Subaxial Injury Classification System demonstrated excellent interobserver reliability and intraobserver reproducibility for fracture morphology, substantial reliability and reproducibility for facet injuries, and excellent reproducibility with substantial reliability for injury subtype. Introduction The AO Spine Subaxial Injury Classification System was designed as a potential tool to help guide management of traumatic subaxial cervical spine injuries. Although subaxial spine injury classifications have existed since the 1970s, they have predominantly relied on anatomic descriptions of injury mechanisms resulting in limited clinical utility [1][2][3]. Furthermore, previous classifications designed to help guide injury management have failed to gain global adoption secondary to poor reliability [4]. The AO Spine Subaxial Injury Classification System was therefore developed with the goal of prognosticating injury severity and creating a classification with good interobserver reliability and intraobserver reproducibility. To accomplish this, the classification system groups traumatic subaxial cervical spine lesions based on their morphology into A (stable-compression), B (potentially unstable-tension band), and C (unstable-translational) type injuries and includes a classification system of associated facet joint injuries. Morphologic injury types are further subdivided hierarchically into subtypes based on stability and injury severity [5]. In this manner, AO Spine created a concise yet comprehensive injury classification system with previous validation studies by the AO Spine Knowledge Forum Trauma group demonstrating substantial interobserver reliability and intraobserver reproducibility [6]. However, large-scale studies demonstrating the high reliability and reproducibility of the classification system are necessary. A number of previous studies have aimed at validating subaxial cervical spine injury classifications, but they routinely rely on a small subset of validation members [7,8]. 
The utilization of large study groups or international spine organizations is one method to increase the generalizability of fracture classifications, but utilization of these groups has been infrequently reported in the cervical spine literature [9]. Further, no previous study has attempted to validate a subaxial cervical spine fracture classification, while including hundreds of validation members. Therefore, the primary goal of the study was to determine the reliability and reproducibility of the AO Spine Subaxial Injury Classification System via an open call to all participating AO Spine members. Methods A live webinar conference was hosted for validation of the AO Spine Subaxial Injury Classification System in 2020. All AO Spine members were invited to participate. Prior to participation, each member attended a live tutorial video and training session directed by one of the creators of the fracture classification. The conference was conducted in English. In this validation, 203 AO Spine members from six different geographic regions of the world (North America, Central and South America, Europe, Africa, Asia and the Pacific, and the Middle East) elected to participate in reviewing computed tomography (CT) videos of 41 distinct subaxial cervical spine injuries. The CT videos consisted of high-resolution sagittal, axial, and coronal videos. Each CT had a viewing range limited to the area of injury. At the same time, each participant was able to view key images of the injury. The videos were presented to the validation members in a randomized order (assessment 1). Each validation member was tasked with classification of each subaxial cervical spine injury based on the AO Spine Subaxial Injury Classification System, which included injury morphology (A, B, C), injury subtype (A1, A2, B1, etc.), and presence of a facet injury (Fig. 1). After 3 weeks, each participant attended a second live webinar to evaluate the same CT videos (with a new randomized order) and re-classify them (assessment 2). All answers were recorded in an online survey. Demographic data including nationality, surgical subspecialty (orthopedic spine, neurosurgery, or other), and years of experience (< 5, 5-10, 11-20, and > 20) were recorded. Statistical analysis A chi-square test was used to evaluate significant differences in the demographic data. Agreement percentages were used to compare the validation member's classification grade to the "gold standard," defined by a panel of expert spine surgeons and traumatologists who came to unanimous agreement on the classification of the injury. Cohen's Kappa (ƙ) statistic was used to assess the reproducibility and reliability of the injury morphology (A, B, or C), injury subtype (A1, A2, A3, etc.), and facet injury (F1, F2, F3, or F4) classification between independent observers (interobserver reliability) and the reproducibility of the injury classification over two assessments (intraobserver reproducibility). The ƙ coefficients were interpreted using the Landis and Koch grading system [10]. A ƙ coefficient of less than 0.2 was defined as slight, between 0.21 and 0.4 as fair, between 0.41 and 0.6 as moderate, between 0.61 and 0.8 as substantial, and greater than 0.8 as excellent reliability or reproducibility. Results A total of 203 validation members elected to participate in the AO Spine Subaxial Injury Classification System. 
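As an illustration of the agreement statistic and the Landis and Koch cut-offs described in the statistical analysis above, the sketch below computes a pairwise Cohen's kappa and maps it to the corresponding category. The ratings and the use of scikit-learn are assumptions for illustration only; they are not the study data or software.

```python
# Minimal sketch: pairwise Cohen's kappa for two raters' morphology grades,
# mapped to the Landis and Koch bands used in the study.
from sklearn.metrics import cohen_kappa_score

def landis_koch(kappa: float) -> str:
    """Return the Landis and Koch category for a kappa coefficient."""
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "excellent"

# Hypothetical morphology grades (A/B/C) assigned by two observers to the same injuries
rater_1 = ["A", "A", "B", "C", "B", "A", "C", "B"]
rater_2 = ["A", "A", "B", "C", "A", "A", "C", "B"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"kappa = {kappa:.2f} ({landis_koch(kappa)})")
```

For the study's interobserver and intraobserver analyses, such pairwise coefficients would be computed across raters and across the two assessments; the sketch shows only the single-pair case.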
A significantly greater proportion of validation members lived in Europe (40%) and Asia (24.6%) with the remaining from Central or South America (16.7%), North America (8.9%), the Middle East (7.4%), and Africa (2.5%) (p < 0.001). Most validation members were orthopedic surgeons (60.6%) or neurosurgeons (36.9%) with only five members identifying as "other" physicians (2.5%) (p < 0.001). The "other" group consisted of residents and radiologists (Table 1). Percent agreement with gold standard Percent agreement for fracture morphology on assessment 1 (AS1) and assessment 2 (AS2) was 95.4 and 94.7%, respectively. Percent agreement for fracture subtype (AS1: 91.7%, AS2: 90.6%) was lower than the percent agreement for fracture morphology, but similar to the percent agreement for facet injury (AS1: 88.6%, AS2: 91.3%). Additionally, the validation members had minimal variability in correctly identifying each fracture morphology [range, 87. We subsequently reclassified the interobserver reliability based on surgeon experience, surgical subspecialty, and geographic region to determine if a surgeon's region of practice, surgical specialty, or experience level resulted in variability in the interobserver reliability of the injury classification. Surgeon experience did not affect interobserver reliability for fracture morphology [range, AS1: 0.83-0.89, AS2: (Table 4). Similar to the interobserver reliability, intraobserver reproducibility was reclassified based on surgeon experience, surgical subspecialty, and geographic region to determine if these factors influenced reproducibility. Although surgeons with 11-20 years' experience had slightly higher intraobserver reproducibility in fracture morphology Table 6). Discussion The international validation of the AO Spine Subaxial Injury Classification System resulted in classification accuracy of greater than 90% for fracture morphology and fracture subtype on both assessments and demonstrated excellent interobserver reliability and intraobserver reproducibility for fracture morphology, substantial to excellent reliability and reproducibility for fracture subtypes, and substantial reliability and reproducibility for facet injuries. Further, each fracture morphology type (A, B, and C), fracture subtype (A1, B1, C1, etc.) and facet injury type (F1, F2, F3, and F4) had at minimum substantial reliability and reproducibility indicating the system may be universally applied across all subaxial cervical spine injuries. Overall, the results from this international validation study support the utilization of the AO Spine Subaxial Injury Classification System as a tool to communicate subaxial cervical spine injury patterns on a global scale. The first study to validate the AO Spine Subaxial Injury Classification System was a pilot study that consisted of ten AO Spine Knowledge Forum Trauma members [6]. Their validation study demonstrated the classification had substantial interobserver reliability for injury subtypes (ƙ = 0.64) and injury morphology (ƙ = 0.65) with substantial intraobserver reproducibility for injury morphology (ƙ = 0.77) and injury subtype (ƙ = 0.75) [6]. The AO Spine pilot study combined facet injuries into fracture morphology (A, B, C, and F) and injury subtypes (A1, B1, C1, F1, etc.) making a direct comparison between the international validation study and the pilot study groups difficult. 
However, when comparing the AO Spine pilot group's facet injury interobserver reliability (ƙ = 0.66) to the international validation group's facet injury reliability (AS1: 0.67, AS2: 0.74) both Table 4 Interobserver reliability of 2020 validation Mean kappa values are sorted by surgeon experience, subspecialty, and region of practice validation groups had a similar substantial reliability. It can also be reasonably assumed that the AO Spine pilot study had similar intraobserver reproducibility (ƙ = 0.75) compared to the international validation after accounting for the separation of fracture morphology reproducibility (ƙ = 0.85) and facet injury reproducibility (ƙ = 0.76). 6 Given the disparate injury morphology reliability between the international group and AO Spine pilot study group, it is unlikely inclusion of facet injuries alone accounted for the large gap in reliability (ƙ = 0.87 vs. 0.65, respectively). While substantial, the reproducibility for facet fracture classification remains lower than that of fracture subtype and morphology. This is likely secondary to difficulties distinguishing between F1 and F2 which are commonly misdiagnosed for one another. Reproducibility would likely improve with CT scan imaging with 1 mm cuts [11]. A couple of reasons may explain why the international validation results had a higher injury morphology reliability when compared to the pilot study. First, the international validation group had 203 participants, compared to the AO Spine pilot study that had 10 participants. This improves the margin of error for a participant who has difficulty applying the classification to cervical spine injuries. Perhaps more importantly, the classification system was available for global use five years prior to the international validation study, giving participants time to utilize the classification system in their spine practice before participating in the international validation. Even though our results suggest there is no correlation between surgeon experience and improved AO Spine Subaxial Injury Classification System reliability or reproducibility, no study has examined if increased application of the classification to cervical spine injuries improves a participants accuracy. Of note, no previous study has found a correlation between surgeon experience and the reliability and reproducibility of different AO Spine classifications [12,13]. A neurosurgery and orthopedic spine attending and three neurosurgery residents performed a separate independent validation of the AO Spine Subaxial Injury Classification System [14]. The intraobserver reproducibility for injury morphology was excellent for both attending spine surgeons (ƙ = 0.86 and 0.95, respectively), and substantial for residents (ƙ = 0.66-0.75) [13]. This held true for injury subtypes with spine surgeons demonstrating excellent reproducibility (ƙ = 0.80 and 0.93, respectively), and residents demonstrating substantial reproducibility (ƙ = 0.63-0.67). When evaluating injury morphology and injury subtype reliability, kappa coefficients ranged from moderate (morphology: ƙ = 0.52 vs. subtype: ƙ = 0.51) on assessment 1 to substantial (morphology: ƙ = 0.63 vs. subtype: ƙ = 0.60) on assessment 2 [14]. 
The contrast in injury morphology reliability between neurosurgery residents and attending surgeons suggests additional use of the classification may improve its accuracy and the importance of clinical experience in understanding nuanced spinal anatomy and fracture patterns, but future studies are required to confirm this finding. The AO Spine Latin America Trauma Study group also validated the reliability of facet injuries based on the AO Spine Subaxial Injury Classification System and found surgeons practicing in South America compared to Central America, neurosurgeons compared to orthopedic spine surgeons, and surgeons with 5-10 years' experience had a greater classification accuracy based on univariate analysis [15]. However, on multivariate analysis only South America region remained significant, while hospital type became significant [15]. Although our study identified an increase in orthopedic spine specialists participating in the webinar, both neurosurgeons and orthopedic spine surgeons had excellent interobserver reliability and intraobserver reproducibility for fracture morphology and subtype, with substantial facet injury interobserver reliability and intraobserver reproducibility. Further, there was minimal variation in intraobserver reproducibility and interobserver reliability based on geographic region. This is consistent with literature evaluating previous AO Spine fracture classification systems in which geographic region did not account for any significant variation in the radiographic classification of thoracolumbar fractures [13]. Limitations were present during this study, which require discussion. A previous iteration of this study was attempted in 2018 with the intention to validate the AO Spine Subaxial Injury Classification System on an international scale. However, the disappointing validation outcomes resulted in methodological design alterations and subsequent revalidation of the classification system in 2020. Although discussed in a separate manuscript, the improvement in validation methodology likely accounted for the substantial to excellent reliability and reproducibility of this classification system. Unique CT videos, which were not previously circulated, were displayed during the 2020 validation. Therefore, any participant who may have had access to the 2018 validation injury films would not obtain an advantage during the 2020 validation. Additionally, due to the utilization of a live webinar to validate the subaxial cervical spine injury classification, participating members were given limited time to classify each injury. This may have led some members who process images at a slower rate, have less experience, or are not fluent in the English language to struggle with completing the validation in a timely fashion, which could have artificially suppressed the reliability and reproducibility of the classification [16][17][18][19]. However, given the substantial to excellent reliability and reproducibility of the classification system on a global level, this was likely of limited significance. While use of magnetic resonance imaging (MRI) would be helpful to better evaluate the extent of associated soft tissue injuries, AO Spine classification systems utilize CT scans to classify all injuries to minimize inequality gaps present globally that limit access to MRI in some areas [20,21]. 
CT scan remains the gold standard for spinal trauma work up, as they are quicker and more accessible than MRIs, with some spine surgeons reporting MRIs taking greater than 24 h to obtain [22]. Conclusion The AO Spine Subaxial Injury Classification System demonstrated excellent intraobserver reproducibility for fracture morphology and fracture subtype with substantial reproducibility for facet injury. The classification system also had substantial to excellent results when assessing interobserver reliability for fracture morphology, fracture subtype and facet injury. When assessing the reliability and reproducibility of the classification system for each fracture subtype and facet injury variation, the AO Spine Subaxial Injury Classification System demonstrated at minimum substantial reliability and/or reproducibility indicating its global applicability as a classification tool for subaxial cervical spine injuries.
Effects of various freezing and thawing techniques on pork quality in ready-to-eat meals Meat rapidly decomposes and discolours due to oxidation and enzyme activity; therefore, it must be frozen when stored. This study investigates the effects of different freezing and thawing processes on pork quality. Pork meat was frozen by natural convection freezing (NCF, -38°C), individual quick-freezing (IQF, -45°C), or liquid nitrogen freezing (LNF, -100°C). Freezing was completed when the thermocouple temperature reached -12°C. The meat was then placed in a general showcase at -24°C for 24 h. Thawing was conducted by natural convection thawing (NCT, 25°C) or running water thawing (RWT, 10°C). The cooking loss and drip loss contents of the samples did not significantly differ, whereas the thawing loss was higher in the NCF sample than in the other samples. Compared to fresh meat, the L*, a*, and b* colour values decreased, and the total colour difference (ΔE) was similar in the samples subjected to IQF/RWT. The pH values of all the samples except for the one subjected to NCF were significantly higher than that of fresh meat (p < 0.05). IQF/RWT treatment resulted in the highest water-holding capacity and maintained homogeneous tissue similar to fresh pork; however, the shear force value was lower than those in the other frozen/thawed samples. These results suggest that the IQF/RWT process was optimal for pork. Freezing has many advantages for the preservation of meat, but it can result in the destruction of muscle fibers due to the formation of ice crystals of various sizes according to the freezing rate (Hong et al., 2005b). This may lead to problems during thawing, such as drip loss, various WHC contents, decreases in the gel-forming potentials of muscle fiber proteins, and reductions in the space within the myofibrils (Sakata et al., 1995; Huff-Lonergan and Lonergan, 2005). Classic freezing and thawing procedures change the texture and cooking properties of food, and this is probably due to the destruction of the membrane structure and concentration changes in the solute (Londahl, 1997). The development of new methods for the freezing and thawing of foods is required in the food industry (Massaux et al., 1999). Several novel freezing techniques, such as individual quick freezing (IQF) and liquid nitrogen freezing (LNF), have been developed in recent years. IQF is an improvement of classical air blast freezing, which generally entails temperatures of -18°C or lower (Fennema et al., 1975). In IQF, small food pieces are frozen in an air blast freezer at temperatures that are lower (-30 to -50°C) than that used for traditional freezing. IQF can freeze individual or bulk samples of various food groups, such as meat, vegetables, and fruits, in less time (Jo et al., 2014). Cryogenic freezing with liquid nitrogen results in high freezing rates, even at the center of the product, and faster freezing times compared with conventional air freezing (Zhou et al., 2010). However, the cost of the cryogenic liquid is high, and this system has the disadvantage of freeze cracking, which causes critical and irreversible damage (Lovatt et al., 2004).
While freezing is a simple and effective way for preserving food, the thawing of frozen food is also important in the process.During food thawing, thermal treatments can damage the chemical, physical, and microbiological properties of food (Hong et al., 2009;Boonsumrej et al., 2007).Minimum thawing times can reduce microbial decomposition, the deterioration of food product quality, and water loss from dripping or dehydration (Taher and Farid, 2001).Most meat thawing is performed within the temperature ranges of -5 to -1°C, and only a small fraction of thawing is performed within the temperature range of -24 to -5°C (Heldman, 1975).Thawing can play an important role in membrane decomposition as well as affect the sensory properties of the food (Nilsson and Ekstrand, 1995).The freeze-thaw process has negative effects on the physicochemical properties and overall quality of the food (Jeong et al., 2011).Therefore, guidelines for the conditions for the optimal processing for freezing and thawing need to be established. The freeze-thaw process may affect the quality of meat differently depending on the species.Universally, frozen storage is necessary to increase shelf life because pork meat has one of the shortest shelf lives among meat products due to fast microbial growth and lipid oxidation (Wulf et al., 1995). The objective of this study was to investigate the changes in the physicochemical properties, microstructure, and quality of pork meat that result from different freezing [natural convection freezing (NCF), IQF, and LNF] and thawing [natural convection thawing (NCT) and running water thawing (RWT)] processes. Materials and sample preparation Pork (crossbreed of Landrace × Yorkshire × Duroc, 6 month old hogs) samples (eye of round) were obtained from a commercial market (48 h postmortem; pH, 5.7-5.9).The fat and connective tissue were removed, and the pork was cut into a rectangular shape (1 × 1 × 5 cm, 90 ± 0.5 g) parallel to the muscle fiber direction.For the fresh (unfrozen) pork, parts of the samples were placed into a showcase at 4°C for 24 h.After freezing treatment, the sample was vacuum packaged in a polyethylene bag, individually.A thermocouple (k-type) was inserted into the center position of each sample in order to monitor the temperature of the samples during freezing and thawing. Freezing and thawing process NCF was performed at -38°C in a showcase, whereas IQF was conducted with the use of a -45°C air blast freezer (SEO JIN Freezer Co., Ltd., Goyang-City, Korea).For LNF, the samples were sprayed in a cryo-chamber system (150 × 30 × 50 cm [L × W × H], HyunDae FA, Korea) with four circular spray nozzles (MS TECH CO., LTD., Sungnam-City, Korea) with a spray angle of 60° and a flow rate of liquid nitrogen vapor of 9.0 L/min.The samples were cryogenically frozen (-100°C) for 2 min 30 s.The freezing was finished when the temperature of the thermocouple reached -12°C.Each freezing treatment sample was divided into two groups and vacuum packaged in a polyethylene bag that was placed in a general showcase at -24°C for 24 h.The thawing process was performed with two methods so that one group was thawed in running water (RWT) at 10°C and the natural convection thawing (NCT) treatment was kept at 25°C.Thawing was finished when the temperature center position reached 4°C.The temperature-time profiles of all of the samples were observed by connecting the thermocouple with a mobile corder (MV-100, Yokogawa Electric Corporation, Tokyo, Japan). 
pH measurements

The pH values of the prepared samples were measured with a pH meter [S-220, Mettler-Toledo (Schweiz) GmbH, Greifensee, Switzerland]. Five grams of each sample was mixed with 45 mL of distilled water and homogenized at 12,000 rpm for 1 min with a homogenizer (HP-91, SMT Co. Ltd., Japan).

Thawing loss and cooking loss

After the thawing treatment, the exudate on the pork surface was removed with a towel, and the samples were weighed. The thawing loss was determined from the difference between the pre-freezing and post-thawing weights. After determining the thawing loss, the samples were bagged in polyethylene pouches and thermally treated in an 80°C water bath (DX9, Hanyoung Nux Co., Ltd., Namgu, Incheon, Korea) until the core temperature reached 75°C. Cooking loss was calculated as the difference between the weights before and after cooking. The losses were calculated as follows:

Thawing loss (%) = [(W1 − W2)/W1] × 100, where W1 is the weight of the sample after freezing (g) and W2 is the weight of the sample after thawing (g).

Cooking loss (%) = [(W1 − W2)/W1] × 100, where W1 is the weight of the sample after freezing and thawing (g) and W2 is the weight of the sample after thermal treatment (g).

Water-holding capacity (WHC)

WHC was measured with a modification of the method of Hong et al. (2005a). One gram of each thawed pork sample was weighed and then placed into a centrifuge tube with absorbent cotton. The samples were centrifuged with a centrifuge separator (1736R, LABOGENE, Korea) at 1,500 × g for 10 min at 4°C. After centrifuging, the pork was removed from the tube, and the weights of the centrifuge tubes were determined before and after drying. The WHC was expressed as the percentage of moisture content in the meat, based on the following weights:
W1: weight of the sample after centrifuging (g); W2: weight of the tube, including the cotton, after centrifuging (g); W3: weight of the tube after the sample was removed and after centrifuging (g).

Shear force measurement

The samples from each batch were cut into cuboids (1 × 1 × 5 cm). The shear force of the pork samples was determined before and after cooking in quintuplicate with a texture analyzer (CT3, Brookfield Engineering Laboratories, Inc., Middleboro, MA, USA) equipped with a V-type plain probe. The texture analysis conditions were as follows: compression type, 10 kg force load cell; test speed, 2.5 mm/s; target distance, 15 mm; and trigger loads of 900 g and 650 g on uncooked and cooked samples, respectively. The test was repeated at least 16 times. The maximum peak force (kg) was used as the indicator of the texture parameter.

Colour measurement

The colour change of each sample was determined with a colourimeter (CR-400, Konica Minolta Inc., Tokyo, Japan) calibrated with a white standard plate (L* = +97.83, a* = -0.43, b* = +1.98). The CIE L*, a*, and b* values were determined as indicators of lightness (L*), red to green colour (a*), and yellow to blue colour (b*). To measure the colour changes, four pieces of pork were arranged in the direction of their longest length. The total colour difference (ΔE) between the fresh meat and the treated samples was calculated with the following equation:

ΔE = √[(ΔL*)² + (Δa*)² + (Δb*)²]

Light microscopy

Light microscopy of fresh and frozen pork tissue was conducted on 0.2 cm-thick sections of formalin-fixed, paraffin-embedded samples stained with hematoxylin and eosin (H and E; BBC Biochemical, USA), using an autostainer (ST5010 Autostainer XL, Leica Microsystems Ltd., Korea).
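As a worked illustration of the weight- and colour-based calculations described in this section, the short Python sketch below computes thawing loss, cooking loss, and the total colour difference ΔE. The percentage-loss formula follows the W1/W2 definitions given above, and all numeric inputs are hypothetical examples rather than measurements from this study.

```python
import math

def percent_loss(weight_before_g, weight_after_g):
    """Weight loss as a percentage of the starting weight (per the W1/W2 definitions)."""
    return (weight_before_g - weight_after_g) / weight_before_g * 100.0

def delta_e(fresh_lab, treated_lab):
    """CIE76 total colour difference between fresh and treated samples (L*, a*, b*)."""
    return math.sqrt(sum((f - t) ** 2 for f, t in zip(fresh_lab, treated_lab)))

# Illustrative numbers only, not data from the study.
thawing_loss = percent_loss(weight_before_g=90.0, weight_after_g=85.7)  # after freezing vs. after thawing
cooking_loss = percent_loss(weight_before_g=85.7, weight_after_g=68.5)  # after thawing vs. after cooking
d_e = delta_e(fresh_lab=(48.2, 6.1, 3.9), treated_lab=(45.0, 4.8, 3.1))

print(f"Thawing loss: {thawing_loss:.2f} %")
print(f"Cooking loss: {cooking_loss:.2f} %")
print(f"Total colour difference (dE): {d_e:.2f}")
```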
Statistical analysis

All of the reported values are the averages of three (or more) experiments. Analysis of variance and Duncan's tests were conducted at the 95% confidence level (p ≤ 0.05) with SPSS 20.0 software (IBM Corporation, Armonk, NY, USA) in order to determine the significance of the differences in the results.

Temperature-time profile

Figure 1 shows the time-temperature profiles of the pork samples during freezing and thawing. The freezing time for the core temperature of the pork to reach -12°C was 58 min in the NCF treatment. The IQF treatment froze rapidly compared to NCF, with a freezing time estimated as 18 min. The LNF-treated pork showed the most rapid temperature drop among the freezing treatments, and the center temperature reached -12°C in 3 min. These results were in accordance with the freezing temperature of each treatment. According to Boonsumrej et al. (2007), cryogenic freezing is favorable as a rapid freezing technique compared to air-blast or commercial freezing, and this was in agreement with the results of our study.

For the thawing methods, the overall thawing times of RWT and NCT were well differentiated (Figure 1b). All of the RWT treatments thawed within 10 min, whereas the NCT treatments took longer than 30 min. Although the NCT treatment was conducted at a higher temperature (25°C) than the RWT treatment (10°C), the results indicated that RWT was more advantageous than NCT for rapid thawing. In addition, the thawing times of the samples differed according to the type of freezing that was applied. In the RWT treatments, thawing proceeded most rapidly in the order of IQF, NCF, and LNF, and the same order was observed for the NCT treatments. Although the reason why the thawing rate was affected by the freezing type was unclear in the present study, this study demonstrated that IQF followed by RWT was the best condition for the meat freezing and thawing processes.

pH

The freezing and thawing treatments affected the pH of the pork, as depicted in Figure 2. Irrespective of the thawing method, NCF treatment resulted in pH values of 5.46-5.47, which were not significantly different from the pH of 5.46 of the fresh control. However, pork that was frozen by IQF and LNF showed a significantly higher pH than that of the control (p < 0.05), and a particular increase in pH was noticeable with LNF. For the thawing methods, the NCT treatments resulted in higher pH values than the RWT treatments (p < 0.05). Consequently, the highest pH (5.58) was obtained with LNF combined with NCT (p < 0.05).

Various studies have reported inconsistent relationships between muscle pH and freezing/thawing treatment. Leygonie et al. (2012) and Devine et al. (1995) reported that frozen/thawed meat had a slightly lower pH than that of the fresh state due to the electrolyte exudate from the muscle tissue. Muela et al. (2010) postulated that the pH of fresh meat and frozen/thawed meat did not differ significantly.
Alternatively, Kim and Lee (2011) reported that frozen/thawed meat had a higher pH than fresh control meat because of partial denaturation of the muscle proteins. Those authors also noted that the pH of treated meat is an important indicator of the physical properties of the muscle proteins. In the present study, it was clear that the impact of the treatment conditions on the physical state of the meat proteins was remarkable. Although the LNF treatment lasted only a short period (2.5 min), such an extreme temperature condition could result in cold denaturation of the muscle proteins, which would thereby increase the pH of the meat. Furthermore, thawing at a relatively high temperature (25°C, NCT) was less favorable for minimizing the quality loss of frozen muscle than thawing at a low temperature (10°C, RWT). With respect to the protein state, the application of LNF requires optimization of the operating conditions, such as the processing temperature and time.

Water-binding properties

The water-binding properties of the frozen/thawed pork are given in Table 1. With the exception of NCF followed by NCT treatment, the overall thawing loss of the pork ranged from 3.75 to 4.80% and was not significantly different among the treatments. However, the NCF/NCT treatment had the highest thawing loss (6.75%) among the treatments (p < 0.05). The highest thawing loss resulting from the NCF/NCT treatment was possibly related to the slow freezing and thawing rates. The NCF treatment involved a slow freezing rate, by which the pork tissue would be more damaged than with the other freezing methods. Considering the decreased thawing loss of LNF followed by NCT treatment, the freezing rate appeared to influence the thawing loss of the pork more than the thawing rate. However, it was unclear why NCF showed a small amount of thaw drip with RWT treatment. One possible explanation is that rapidly frozen meat was not affected by the thawing method, whereas rapid thawing was necessary when the meat was frozen slowly.

The cooking loss of the fresh control was 15.7%, which was significantly lower than the losses from the freezing/thawing treatments (p < 0.05). Among the treatments, no significant differences in cooking loss were found, and the loss ranged from 18.9% to 21.7%. There is no doubt that the tissue damage caused by ice crystallization and recrystallization contributed to the high cooking loss compared with fresh meat. For frozen/thawed meat, Mortensen et al. (2006) reported that a low freezing temperature and a high thawing temperature tended to result in high thawing loss and cooking loss. However, sample size is another important factor that affects the cooking loss of meat samples (Leygonie et al., 2011). In the present study, the meat was sampled in small strips of the kind used for home meal replacement products. The small size of the samples may explain why the differences in cooking loss among the frozen/thawed treatments were not significant.

Compared to the 85.3% WHC of the fresh control, the WHC of the treatments was slightly or significantly lower and ranged from 77.7 to 85.2%. Jo et al. (2014) found that the WHC of muscle after freezing/thawing treatment is lower than that of fresh controls, which is in accordance with the findings of this study. Miller et al. (1980) noted that the WHC percentage decreases for pork and beef because of damage to the muscle tissue from being frozen. Ngapo et al. (1999) showed that damaged cell membranes allow drip exudate to pass from the intracellular space to the extracellular space, which results in the easy release of drip from the muscle tissue.
In this study, LNF followed by NCT (82.4%) and IQF followed by RWT treatment (85.2%) did not show significant differences in WHC compared with the control, while the other treatments had lower WHCs than the control (p < 0.05). With respect to food hygiene, the thawing process should be conducted at a relatively low temperature in a short time; thus, RWT would be better than NCT in order to minimize the deterioration of the water-binding properties of the treated meat. Therefore, these results suggest that IQF is the best application for meat freezing.

Shear force

Figure 3 depicts the shear force of the cooked pork. The fresh pork had a shear force of 2.41 kg, and the frozen samples that were thawed by RWT treatment had significantly higher shear forces compared with the control (p < 0.05). Shanks et al. (2002) and Lagerstedt et al. (2008) reported that meat toughness was decreased by the freezing and thawing process. The decrease in the shear force might have been due to the loss of cell membrane durability that occurred as a result of ice crystal formation and the reduction in shearing (Lui et al., 2010). The tenderization of meat can occur as a result of the activation of enzymes, such as those involved in proteolysis, and the loss of physical structure by ice crystal formation (Leygonie et al., 2012). In these investigations, however, the shear force was measured prior to the thermal processing. Alternatively, Lagerstedt et al. (2008) reported that the shear force of frozen meat is closely related to the storage period and storage conditions. In addition, frozen/cooked meat showed a higher shear force than fresh/cooked meat did (Kolczak et al., 2005). In this study, IQF treatment resulted in the best tenderness of the meat, and IQF was favorable for application as a quick freezing technique.

Colour

The CIE colour parameters of the treatments were compared with those of the fresh control (Table 2). All of the freezing and thawing treatments had lower colour values than those of the control, with the exception of b*, for which only the NCF/NCT samples showed a significantly lower value compared with the control (p < 0.05). Lind et al. (1971) obtained similar results in that the freezing rates did not have a significant effect on colour. Among the frozen/thawed samples, the b* value of the IQF/RWT sample was the most similar to that of the fresh pork. Its total colour difference tended to be lower than those of the other samples, but the difference was not significant. These results can be explained by the colour values of the IQF/RWT sample being similar to those of fresh pork meat.

The hue angle of a meat product has been used to indicate the colour stability of fresh and processed meats (Brewer and Harbers, 1992). Leygonie et al. (2012) reported that the CIE L*, a*, and Chroma values in the visual test were significantly decreased in frozen/thawed samples. These results suggest that the meat product should exhibit an overall browner and more somber appearance because of rapid oxidation of the myoglobin after freezing/thawing.

Light microscopy

The light microscopy images of the pork samples are shown in Figure 4. For the raw pork (control), transverse sections of the myofibrils had the most uniform shape, and the myofibrils maintained their integrity. However, muscle segmentation and segmental coagulation necrosis in longitudinal sections were observed after the NCF and LNF treatments. This result could be explained by Jo et al. (2014), who suggested that this phenomenon may result from ice crystal formation and recrystallization.
In contrast, IQF maintained the condition of the myofibrils, although their density was not as intense as that of the raw meat. Furthermore, the condition of the myofibrils did not change after the different thawing treatments. According to Mortensen et al. (2006), the cell structure of frozen muscle tissue is closely dependent on the freezing rate. Rapidly frozen muscle tissue showed a broadly intact structure with partial damage, whereas tissue frozen slowly showed a completely damaged cell structure. Therefore, our results demonstrate that tissue damage during freezing and thawing is inevitable and confirm that tissue damage was influenced more by the freezing methods than by the thawing methods.

Conclusion

This study compared the effects of different freezing and thawing treatments on the quality of pork. The factors of temperature, time, and the rates of freezing and thawing influenced the changes in meat quality attributes, such as colour, thawing loss, WHC, and shear force. The results of this study suggested that IQF/RWT treatment is an effective process by which meat quality is maintained.

Figure 2. Effects of the freezing and thawing treatments on the pH of pork. The pH values of the control (fresh pork) and after the freezing and thawing treatments of natural convection freezing (NCF), individual quick-freezing (IQF), liquid nitrogen freezing (LNF), natural convection thawing (NCT), and running water thawing (RWT). Each value is expressed as the mean ± standard deviation of multiple measurements (n = 5). a-d Means with different superscript letters are significantly different (p < 0.05).

Figure 3. Effects of the freezing and thawing treatments on the shear force of cooked pork. The control (fresh pork) and experimental samples were subjected to freezing and thawing with natural convection freezing (NCF), individual quick-freezing (IQF), liquid nitrogen freezing (LNF), natural convection thawing (NCT), and running water thawing (RWT). Each value is expressed as the mean ± standard deviation of multiple measurements (n = 5). a-d Means with different superscript letters differ significantly (p < 0.05).

Figure 4. Histological appearance of the frozen and thawed pork samples following the different freezing and thawing processes. (A) Control (raw pork), (B and E) natural convection freezing, (C and F) individual quick-freezing, (D and G) cryogenic freezing, (B, C and D) natural convection thawing, and (E, F and G) running water thawing.

Table 1. Effects of the freezing and thawing treatments on the water-binding properties.

Table 2. Effects of the freezing and thawing treatments on the CIE colour of pork.
Lessons From the Aftermaths of Green Revolution on Food System and Health

Food production has seen various advancements globally in developing countries, such as India. One such advancement was the green revolution. Notably, the World Bank applauds the introduction of the green revolution, as it reduced rural poverty in India for a certain time. Despite the success of the green revolution, the World Bank reported that health outcomes did not improve. During the post-green revolution period, several notable negative impacts arose. Dedicated studies of the benefits and harms were not conducted before the introduction of the green revolution. Some such interventions deviate from the natural laws of balance and functioning and are unsustainable practices. To avoid the adverse effects of some of these developments, a review of these interventions is necessary.

INTRODUCTION

The production of food within India was insufficient in the years from 1947 to 1960 owing to a growing population, and a famine was also anticipated (Nelson et al., 2019). Food availability was only 417 g per day per person (Ghosh, 2002). Many farmers were in debt, and they had become landless laborers. The prevailing political situation also had a negative impact on the food system. There was a severe shortage of food crops as well as commercial crops. At the same time, Norman Borlaug, an agronomist, contributed significantly to the green revolution, whose effects spread throughout the world. He provided new seeds for cultivation, which were stocky, disease-resistant, fast-growing, and highly responsive to fertilizers. In India, the green revolution was launched under the guidance of geneticist Dr. M. S. Swaminathan (Somvanshi et al., 2020). It started around the 1960s and helped increase food production in the country. The green revolution's primary aim was to introduce high-yielding varieties (HYVs) of cereals to alleviate poverty and malnutrition (Nelson et al., 2019). Undeniably, the green revolution was capable of mitigating hunger and malnutrition in the short term as well (Davis et al., 2019).

What Is the Green Revolution?

The green revolution led to high productivity of crops through adapted measures, such as (1) an increased area under farming, (2) double-cropping, i.e., planting two crops per year rather than one, (3) adoption of HYVs of seeds, (4) greatly increased use of inorganic fertilizers and pesticides, (5) improved irrigation facilities, and (6) improved farm implements, crop protection measures, and modifications in farm equipment (Singh, 2000; Brainerd and Menon, 2014). There was a high investment in crop research, infrastructure, market development, and appropriate policy support (Pingali, 2012). Efforts were made to improve the genetic component of traditional crops. This included selection for higher yield potential; wide adaptation to diverse environments; short growth duration; superior grain quality; resistance to biotic stress, insects, and pests; and resistance to abiotic stress, including drought and flooding (Khush, 2001). After the green revolution, the production of cereal crops tripled with only a 30% increase in the land area cultivated. This held true all over the world, with a few exceptions. In addition, there were significant impacts on poverty reduction and lower food prices. Studies also showed that without the green revolution, caloric availability would have declined by around 11-13%.
These efforts benefitted all consumers in the world, particularly the poor. There were also improved returns on crop improvement research. The green revolution also prevented the conversion of thousands of hectares of land to agriculture (Pingali, 2012). It helped India move from a state of importing grains to a state of self-sufficiency (Brainerd and Menon, 2014). Earlier, India operated on a "ship-to-mouth" system, i.e., it depended on imported food items (Ramachandran and Kalaivani, 2018). There were undoubtedly positive effects on overall food security in India, and useful and elaborate evidence in support of the positive impact of the green revolution is available. However, after a certain period, some unintended but adverse effects of the green revolution were noticed. This paper examines the negative impacts of the green revolution on the food system in India.

Studies by the departments of conventional agriculture, social sector development, etc. bring out the positive impacts of the green revolution, such as increased yield and reduced mortality and malnutrition (Somvanshi et al., 2020; von der Goltz et al., 2020). On the other hand, studies conducted by environmental and public health departments suggest that reduced usage of pesticides is sufficient to mitigate the negative impacts (Gerage et al., 2017). Many studies are being conducted to determine the extent of the impacts of pesticides, insecticides, and other similar chemicals. Although many studies have focused on this topic, this paper aims to inform policy by asserting that interventions that are beneficial in the short term, such as the green revolution, can be detrimental and irreversible in the long run if ecological principles are not considered (Clasen et al., 2019). Recovering from environmental damage requires far more effort, time, and other resources than it takes to cause the damage. Hence, any new intervention needs to be checked for its eco-friendliness and sustainability. Continuing intensified usage of pesticides is not advisable in an ever-deteriorating environment, and alternative solutions that can promote economic growth and increased yield with less harm to the environment can be implemented. The vicious cycle of problem-solution-negative impacts has to be broken at some point. For example, a second green revolution is being pursued in various countries (Ameen and Raza, 2017; Armanda et al., 2019). Instead, techniques to promote sustainable agriculture can be considered. Hence, there has to be a wake-up call before history repeats itself.

Impacts on Agriculture and Environment

Pests and Pesticides

There has been a significant increase in the usage of pesticides, and India has become one of the largest producers of pesticides in the whole of Asia (Narayanan et al., 2016). Although this has contributed to substantial economic gains (Gollin et al., 2018), it has been found that a significant amount of pesticide use is unnecessary in both industrialized and developing countries. For instance, it is reported that the presence of pesticides in freshwater is a costly concern, with detected levels exceeding the set limits (Choudhary et al., 2018). Although the average amount of pesticide usage is far lower than in many other countries, pesticide residues are high in India. This causes a large amount of water pollution and damage to the soil.
Another major issue is pest attack, which arises due to an imbalance among pests. Due to increased pesticide usage, predator and prey pests are no longer in balance, and hence one kind of pest becomes overpopulated and attacks certain crops. This leads to an imbalance in the production of those crops, which then require stronger pesticides or new kinds of pesticides to tackle the attacking pests. This has also led to disruption of the food chain (Narayanan et al., 2016).

Water Consumption

India has the highest demand for freshwater usage globally, and 91% of its water is now used in the agricultural sector (Kayatz et al., 2019). Currently, many parts of India are experiencing water stress due to irrigated agriculture (Davis et al., 2018). The crops introduced during the green revolution were water-intensive crops. Most of these crops are cereals, and cereals constitute almost 50% of the dietary water footprint in India (Kayatz et al., 2019). Since the crop cycle is short, allowing more harvests per year, the net water consumed by these crops is also very high. The production of rice currently requires flooding for its growth (International Rice Research Institute). Canal systems were introduced, and irrigation pumps drew water from the groundwater table to supply the water-intensive crops, such as sugarcane and rice (Taylor, 2019). Punjab is a major wheat- and rice-cultivating area and hence one of the most water-depleted regions in India (Alisjahbana, 2020). It is predicted that Punjab will face water scarcity within a few years (Kumar et al., 2018). Water resources are diminishing, and soil toxicity has increased the pollution of underground water. The only aim of the green revolution was to increase food production sufficiently to feed everyone; the environmental impacts were not taken into account (Taylor, 2019). In the previous budget allocation, irrigation was allotted 9,828 crore INR, compared with 3,080 crore INR for agriculture excluding irrigation, and this pattern has persisted over the past three years (NABARD, 2020). Overall, the GDP from agriculture is 380,239 crore INR (16.5% of GDP) (Economics, 2020; India, 2020). This indicates that there has been higher investment in irrigation, owing to its increased need, in comparison with the other inputs required for agriculture.

Air Pollution

Air pollution from the burning of agricultural waste is a major issue these days. In the heartland of the green revolution, Punjab, farmers burn their fields to sow the crops for the next cycle instead of following the traditionally practiced natural cycle. The next crop cycle arrives very soon because the hybrid crops introduced in the green revolution have short growth durations. This contributes to the high level of pollution from the burning of agricultural waste in parts of Punjab (Davis et al., 2018). This kind of cultivation can lead to the release of many greenhouse gases, such as carbon dioxide, methane, and nitrogen oxides (de Miranda et al., 2015).

Impacts on Soil and Crop Production

The crop cycle was repeated to increase crop production and reduce crop failure, which depleted the soil's nutrients (Srivastava et al., 2020). Similarly, because crop residues and organic matter are not returned to the soil, intensive cropping systems resulted in the loss of soil organic matter (Singh and Benbi, 2016).
To meet the needs of the new kinds of seeds, farmers applied increasing amounts of fertilizers as the soil quality deteriorated (Chhabra, 2020). The application of pesticides and fertilizers led to an increase in the levels of heavy metals, especially Cd (cadmium), Pb (lead), and As (arsenic), in the soil. Weedicides and herbicides also harm the environment. The soil pH increased after the green revolution due to the usage of these alkaline chemicals (Sharma and Singhvi, 2017). The practice of monoculture (only wheat-rice cultivation) has a deleterious effect on many soil properties, which include migration of silt from the surface to subsurface layers and a decrease in organic carbon content (Singh and Benbi, 2016). Toxic chemicals in the soil destroyed beneficial microorganisms, which are essential for maintaining soil fertility. There is a decrease in yield due to a decline in the fertility of the soil. In addition, the use of tractors and mechanization damaged the physicochemical properties of the soil, which affected the biological activities in the soil. Under traditional methods, soil recovers in the presence of stressors (Srivastava et al., 2020); however, this does not happen with these modern methods. In a study conducted in Haryana, soil was found to exhibit waterlogging, salinity, erosion, alkalinity, and decline or rise of the groundwater table linked to brackish water, affecting production and food security in the future (Singh, 2000).

Although there was an increase in the production of crops for around 30 years, the growth in rice yield became stagnant and dropped to 1.13% in the period from 1995 to 1996 (Jain, 2018). Similarly, for wheat, production declined from the 1950s due to the decrease in its genetic potential and the monoculture cropping pattern (Handral et al., 2017). The productivity of potato, cotton, and sugarcane also became stagnant (Singh, 2008). Globally, agriculture is now on an unsustainable track and has a high ecological footprint (Prasad, 2016).

Extinction of Indigenous Varieties of Crops

Due to the green revolution, India lost almost 1 lakh (100,000) varieties of indigenous rice (Prasad, 2016). Since the time of the green revolution, there has been reduced cultivation of indigenous varieties of rice, millets, lentils, etc. In turn, there has been an increased harvest of hybrid crops, which grow faster (Taylor, 2019). This is indicated in Figure 1. There is a large increase in the cultivation of wheat, soybeans, and rice and a large decrease in the cultivation of sorghum, other millets, barley, and groundnuts. The increase in certain crops was due to the availability of HYVs of seeds and an increase in the area of production of these crops (Singh, 2019). The preferences of farmers also changed in terms of the crops they cultivated. The native pulses, such as moong, gram, and tur, and some oilseed crops, such as mustard and sesame, were no longer cultivated on as large a scale as before. Traditionally grown and consumed crops, such as millets, grow easily in arid and semi-arid conditions because they have low water requirements. However, high-yielding seeds of millets were unavailable, and hence farmers moved to only rice and wheat (Srivastava et al., 2020).

Impacts on Human Health

Food Consumption Pattern

Traditionally, Indians consumed a lot of millets, but these became mostly fodder after the green revolution (Nelson et al., 2019).
The Cambridge World History of Food mentions that the Asian diet included food items such as millets and barley (Kiple and Ornelas, 2000). As already mentioned, after the period of the green revolution, there were significant changes in food production, which in turn affected the consumption practices of Indians. The Food and Agriculture Organization (FAO) has recorded that over the years 1961-2017, there was a decrease in the production of millets and an increase in the production of rice (Food and Agricultural Organisation, 2019; Smith et al., 2019); thus, rice became the staple of the country's diet. Though the green revolution made food available to many and increased calorie consumption, it failed to provide a diverse diet.

Health-Related Impacts on the General Population

Most of the pesticides used belong to the organophosphate, organochlorine, carbamate, and pyrethroid classes. Indiscriminate pesticide usage has led to several health effects in human beings, involving the nervous, endocrine, reproductive, and immune systems. Sometimes, the amount of pesticide in the human body increases beyond the capacity of the detoxification system due to continuous exposure through various sources (Xavier et al., 2004). Of all exposure routes, the intake of food items containing pesticide residues results in the highest exposure, i.e., 10³-10⁵ times higher than that arising from contaminated drinking water or air (Sharma and Singhvi, 2017).

Impacts on Farmers

Most of the farmers who use pesticides do not use personal protective gear, such as safety masks and gloves, as there is no awareness about the deleterious effects of pesticides. Pesticides applied to plants can directly enter the human body, and nitrate in the blood can immobilize hemoglobin. Organophosphates can also cause cancer with prolonged exposure. Since pesticide residues occur in small quantities, they may not be seen or tasted; however, continuous exposure over several years causes deposition in the body. Dichlorodiphenyltrichloroethane (DDT) was a very common pesticide used in India; it is now banned internationally, as it was found to bioaccumulate and cause severe harmful effects in human beings (Sharma and Singhvi, 2017). However, DDT is still used illegally in India. In India, women make up around 50% of the agricultural workforce. Hence, many of these women are directly exposed to these toxins at a young age and are highly vulnerable to the negative impacts, including effects on their children. A significant correlation has been demonstrated between agrochemical content in water and total birth defects. The damaging impact of agrochemicals in water is more pronounced in poor countries, such as India (Brainerd and Menon, 2014).

DISCUSSION

Efforts are underway to produce genetic variants of millets that can withstand biotic and abiotic stresses. Earlier, the introduction of genetic variants of rice and wheat, along with pesticides, was the solution for malnutrition, but it led to environmental destruction within a few years. In the short term, food scarcity might rise again due to increased water depletion and soil damage. Any new interventions should be introduced carefully so as not to disrupt other systems and to prevent future adversities. A domino effect is expected to occur when there is any disruption in the ecosystem: if even one link in the food chain is affected, other parts of the chain are affected as well. Most ecological disruption is caused by human intervention (Vaz et al., 2005).
Pesticides used for agricultural activities are released into the environment through air drift, leaching, and run-off and are found in soil, surface water, and groundwater. They can contaminate soil, water, and other vegetation. Pesticide residues are present in almost all habitats and are detected in both marine and terrestrial animals (Choudhary et al., 2018). The mechanisms of uptake include absorption through the gills or teguments (bioconcentration) as well as the consumption of contaminated food (biomagnification or bioamplification). In marine systems, seagrass beds and coral reefs were found to have very high concentrations of persistent organic pollutants (Dromard et al., 2018). Pesticide use also affects the activities of insects and microbes. It kills insects and weeds, is toxic to other organisms, such as birds and fish, and contaminates meat products, such as chicken, goat, and beef. This can lead to bioaccumulation in human beings along with poor food safety, thus impairing nutrition and health. Repeated application leads to loss of biodiversity (Choudhary et al., 2018). Consumption of pesticide-laden food can lead to loss of appetite, vomiting, weakness, abdominal cramps, etc. (Gerage et al., 2017). There is a decline in the number of pollinators, for instance, through the destruction of bumblebee colonies, which are an important group of pollinators on a global scale (Baron et al., 2017). Honeybee populations are going extinct, which poses a great threat to the survival of human beings (Hagopian, 2017). The residue level of these pesticides depends on the organism's habitat and position in the food chain. This is a serious issue because pesticide usage is predicted to double in the coming years (Choudhary et al., 2018). In addition, it is nearly impossible to recover the lost varieties of indigenous rice. Likewise, further advancements should not lead to the extinction of other indigenous varieties of grains, such as millets.

In conclusion, the effects of the green revolution persist. The green revolution, which was beneficial in ensuring food security, has had unintended but harmful consequences for agriculture and human health. New interventions therefore need to be tested and piloted before implementation, and continuous evaluation of harms and benefits should guide implementation. An already fragile food system has been affected by the aftermath of the green revolution. The potential negative impacts are not part of the discourse, as they can affect the narratives of development and prosperity. Developments introduced out of necessity may not be sustainable in the future. Organic ways of farming need to be adopted for sustainable agricultural practices. Similarly, alternative agricultural techniques can be practiced, such as intercropping and Zero Budget Natural Farming (ZBNF), whose essential principles involve the enhancement of nature's processes and the elimination of external inputs (Khadse et al., 2018). The government of Andhra Pradesh (AP), a southern state in India, plans to convert 6 million farmers and 8 million hectares of land under the state initiative of Climate Resilient Zero Budget Natural Farming because of the positive outputs obtained in the ZBNF impact assessments in the states of Karnataka and AP (Reddy et al., 2019; Koner and Laha, 2020). In AP, it was observed that crop yields increased by 9% in the case of paddy and 40% in the case of ragi.
Net income increases ranged from 25% in the case of ragi to 135% in the case of groundnut (Martin-Guay et al., 2018; Reddy et al., 2019). There is a need for a systems approach in dealing with food insecurity, malnutrition, and other similar issues. As in the example already discussed, the green revolution was brought in to address the problem of low yields; now, a second green revolution is planned. Before such interventions are undertaken, environmental risk assessments and other evaluation studies should be conducted for a sustainable future.

AUTHOR CONTRIBUTIONS

DJ conceived the idea. DJ and GB contributed to the writing of the article. Both authors contributed to the review, proofreading, and finalization of the manuscript.

FUNDING

The MAASTHI cohort was funded by an Intermediate Fellowship from the Wellcome Trust DBT India Alliance (Clinical and Public Health research fellowship) to GB (grant number IA/CPHI/14/1/501499). The funding agency had no role in the design and conduct of the article, review and interpretation of the data, preparation or approval of the manuscript, or the decision to submit the manuscript for publication.
Lavandula x intermedia—A Bastard Lavender or a Plant of Many Values? Part II. Biological Activities and Applications of Lavandin

This review article is the second in a series aimed at providing an in-depth overview of Lavandula x intermedia (lavandin). In part I, the biology and chemistry of lavandin were addressed. In part II, the focus is on the functional properties of lavandin and its applications in industry and daily life. While reviewing the biological properties, only original research articles employing lavandin were considered. Lavandin essential oil has been found to have antioxidant and biocidal activity (antimicrobial, nematicidal, antiprotozoal, insecticidal, and allelopathic), as well as other potential therapeutic effects such as anxiolytic, neuroprotective, sleep-quality-improving, antithrombotic, anti-inflammatory, and analgesic effects. Other lavandin preparations have been investigated to a much lesser extent. The research is either limited or inconsistent across all studies, and further evidence is needed to support these properties. Unlike its parent species, Lavandula angustifolia (LA), lavandin essential oil is not officially recognized as a medicinal raw material in the European Pharmacopeia. However, whenever it has been compared to LA in shared studies, it has shown similar effects (or even more pronounced effects in the case of biocidal activities). This suggests that lavandin has similar potential for use in medicine.

Introduction

Lavandula x intermedia Emeric ex Loisel (LI), also known as lavandin, Dutch lavender, or bastard lavender, is a widely cultivated aromatic plant belonging to the family Lamiaceae Lindl. It is a hybrid of true lavender (Lavandula angustifolia, LA) and spike lavender (Lavandula latifolia, LL). While it shares many similarities with its parent species, lavandin possesses unique characteristics that set it apart. This review is a continuation of an article entitled "Lavandula x intermedia-A Bastard Lavender or A Plant of Many Values? Part I. Biology and Chemical Composition of Lavandin" [1]. Part I covered the biological and chemical characteristics of L. x intermedia, including taxonomy, geographical range, morphological features, popular cultivars, cultivation, and essential oil production. Additionally, the chemical composition of its essential oil and hydrolate was thoroughly discussed and compared to the parent species, taking into account current industry standards such as ISO, the European Pharmacopeia (Ph. Eur.), and WHO monographs. We stated that lavandin essential oil (further referred to as lavandin oil) has a chemical composition similar to that of LA, but with a higher concentration of terpenes that give it a camphor scent, making it less appealing for use in the perfume industry. However, LI has some benefits, such as a higher yield of essential oil and lower production cost, making it a favored lavender crop for farming. Nonetheless, despite its commercial success and widespread cultivation, there is a shortage of scientific research on the subject. The scientific community tends to focus on LA, a raw material recognized by the European Pharmacopeia. Furthermore, lavandin is often seen as an inferior, "bastard" lavender compared to true lavender. This assertion, however, aside from its use in the perfume industry, is not supported by reliable arguments. This raises the question of whether lavandin and lavandin-related products are truly less valuable than those of true lavender in other, non-perfumery applications. In Part II, we discuss all reported biological effects of L. x intermedia
essential oil and its other extracts in an attempt to answer this question. Furthermore, we review the current applications of lavandin in industry and everyday life. As we embarked on this review, we aimed to thoroughly examine all available original scientific research articles on LI and explore its potential as an alternative to LA in various applications. There is a need for a review article dedicated to lavandin, as no such article has been published thus far, let alone one that explores its biological activities. Through this review, we hope to contribute to the understanding of this plant and lead to a greater appreciation of its importance in the scientific community and, consequently, its inclusion in more scientific studies alongside Lavandula angustifolia.

Biological Activities of Lavandin

The most obvious and apparent biological property of lavandin is its smell. It is caused by the volatile chemicals, mainly oxygenated monoterpenes, secreted and stored in the aerial parts of the plant [2][3][4][5]. Most of the applications of LI in industry and daily life result from this significant feature of the plant. Apart from its smell, lavandin, like many other aromatic herbs, is associated with numerous biological effects. This section aims to review the current state of knowledge on the biological activities of L. x intermedia, whether of its essential oil or of any other kind of extract. We have reviewed all research articles we could find on the subject, including all Scopus hits for phrases such as "Lavandula and intermedia" and "Lavandula and hybrida". We did not consider or cite any articles that made generalizations about the biological effects of lavandin based on studies of other taxa within the Lavandula genus, a practice that appears to be common in the scientific literature.

Biocidal Activities

A detailed examination of the scientific literature regarding the biological activities of LI allows for the conclusion that most research conducted in this field relates to the biocidal properties of lavandin, specifically its essential oil, with hydrolates and other plant extracts being studied only sporadically. Tables 1 and 2 summarize all of the original literature in this respect.

Antimicrobial

Many essential oils (EOs) exhibit antimicrobial properties. They have been used for centuries in traditional medicine and for embalming. Even though multiple EOs have demonstrated antimicrobial action, only some possess the potential to be used as antimicrobial agents, as the real-world effect is usually significantly weaker compared to antibiotics and other synthetic compounds [6]. L. angustifolia has been proven to be effective against many bacteria, fungi, and some viruses [6][7][8][9]. There is also ample evidence for the antibacterial and antifungal action of lavandin oil but, to the best of our knowledge, no research investigating its antiviral effect. Antimicrobial studies of lavandin oil, like those of other essential oils, are usually conducted in vitro with the use of agar diffusion (disc or well) methods and/or dilution methods. The diffusion methods, especially the disc diffusion method, are mainly used for antimicrobial susceptibility testing.
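As an illustration of how readouts from these assays are typically summarized, the minimal Python sketch below averages replicate zone-of-inhibition measurements from a disc diffusion test and, for a two-fold serial dilution series, reports the lowest concentration with no visible growth as the MIC. The organisms, concentrations, and growth calls are hypothetical and serve only to show the calculations, not to reproduce any of the cited studies.

```python
from statistics import mean, stdev

# Hypothetical replicate zone-of-inhibition readings (mm) from a disc diffusion assay.
zones_mm = {
    "E. coli": [15, 17, 16],
    "S. aureus": [9, 10, 11],
    "C. albicans": [20, 22, 21],
}

for organism, readings in zones_mm.items():
    print(f"{organism}: inhibition zone {mean(readings):.1f} ± {stdev(readings):.1f} mm")

# Hypothetical broth microdilution results: concentration (% v/v) -> visible growth?
# In a two-fold dilution series, the MIC is the lowest concentration without growth.
dilution_growth = {3.75: False, 1.88: False, 0.94: False, 0.47: False, 0.23: True}

no_growth = [conc for conc, grew in dilution_growth.items() if not grew]
mic = min(no_growth) if no_growth else None
if mic is not None:
    print(f"MIC: {mic} % (v/v)")
else:
    print("MIC above the tested concentration range")
```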
Dilution methods are the most suitable for the determination of minimum inhibitory concentration (MIC), minimal lethal concentration (MLC), minimum bactericidal concentration (MBC), and minimum fungicidal concentration (MFC) values because they enable calculation of the concentration of the tested antimicrobial chemical in the broth or agar medium [7,10,11]. A review of all antimicrobial activity studies of lavandin preparations (mostly essential oils) against both bacteria and fungi reported in the literature is presented in Table 1.

The antibacterial and antifungal effects of lavandin EOs, including activity against many Gram-positive and Gram-negative bacteria, have been demonstrated by multiple researchers (cited in Table 1). Different lavandin cultivars were tested. For example, Garzoli and coworkers tested the EO of the very popular cultivar Grosso grown in Italy against Escherichia coli, Acinetobacter bohemicus, Pseudomonas fluorescens, Bacillus cereus, and Kocuria marina and found a bactericidal effect on Gram-negative bacteria and a bacteriostatic effect on Gram-positive bacteria for both the liquid and vapor phases [12]. According to the various tests the authors conducted, A. bohemicus was the strain most vulnerable to lavandin essential oil: it exhibited an inhibition zone of 47 mm (greater than that of the positive control, gentamicin) and had a MIC of 0.47% in the broth microdilution test. P. fluorescens was the most resistant of all the strains tested, with an inhibition zone of just 8.5 mm and a MIC of 3.75%. Bajalan et al. found high antibacterial activity of Iranian lavandin leaf oil against G− E. coli and G+ Streptococcus agalactiae and moderate activity against G− K. pneumoniae and S. aureus [13]. An antibacterial effect in vitro and in vivo in mice against Citrobacter rodentium (G−) was also indicated by Baker et al. [14].

When L. x intermedia and L. angustifolia are considered in the same study, lavandin oil usually possesses antibacterial and antifungal effects similar to or stronger than those of true lavender oil. Jianu et al. investigated Romanian LI and LA essential oils against Enterococcus faecium, Shigella flexneri, Salmonella typhimurium, Escherichia coli, and Streptococcus pyogenes. The studied oils presented significant bactericidal effects against S. flexneri, S. aureus, and E. coli but not against S. pyogenes. In most cases, the antibacterial activity of L. x intermedia was higher [15]. Stronger action of LI was also observed by Tardugno and coworkers, who tested EOs of different cultivars of LI and LA (of Italian origin) against Listeria monocytogenes [16]. Di Vito et al. indicated similar antibacterial and antifungal properties of both lavender oils, with a slightly stronger effect for L. x intermedia [17]. On the other hand, Robu et al. tested Romanian LI and LA essential oils against S. aureus, S. pyogenes, P. aeruginosa, E. coli, and Candida albicans and noticed that L. angustifolia essential oil was more active against certain bacterial strains, but L. x intermedia EO was more effective against Candida [18]. Antifungal properties of LI EO at high doses against Candida albicans were also noted by Karakaş and Bekler [19]. Moon et al. likewise observed antifungal activity of the oils of lavandin and the other lavender species they studied; the EOs of three different LI cultivars and of LA were effective against Aspergillus nidulans and Trichophyton mentagrophytes [20]. Lavandin oil was also shown by Larrán et al. to be fungistatic against some strains of Ascosphaera apis, the fungus causing chalkbrood disease of bees [21].
However, Erland and coworkers tested LI 'Provence' and 'Grosso' and LA oils and observed no significant antifungal effect against three agricultural pathogens, except for some activity of the LI 'Provence' oil against B. cinerea [22].

When the antimicrobial activity of lavandin or true lavender oil is compared with the oils of other aromatic plants, some plants have been found to be far more effective, usually due to their high content of phenolic compounds, which are characterized by strong antimicrobial properties. Tardugno et al. conducted in vitro screening to assess the antimicrobial activity of 14 essential oils against oral pathogenic bacteria. Lavandin oils showed moderate activity among all the tested oils, with MICs ranging from 2 to 512 µL/mL. The most effective oils were those derived from Thymus vulgaris and Rosmarinus officinalis, which had MICs of 4-16 and 1-32 µL/mL, respectively [23]. Rota and coworkers studied antimicrobial activity against selected foodborne pathogenic bacteria. Once again, lavandin oil showed intermediate antibacterial activity among the tested samples. As expected, the strongest effects were observed for T. vulgaris and Satureja montana oils, whereas the weakest were noted for Salvia sclarea and Hyssopus officinalis [24]. The above-mentioned Di Vito et al. also studied EOs other than lavender and found that both lavandin and true lavender oils exhibited weaker activity against the tested microorganisms (bacteria, drug-resistant yeasts, and fungal dermatophytes) than oils rich in thymol and/or carvacrol, such as those from Origanum hirthum, S. montana, Monarda didyma, and Monarda fistulosa. The same authors also demonstrated that essential oils act much more strongly than hydrolates: the hydrolates exhibited mostly high MIC and MLC values (above 50%), whereas the lavender essential oils had MIC and MLC values mostly above 2% [17]. No antibacterial activity of Lavandula spp. hydrolates was observed by Moon et al. The authors also evaluated aqueous and ethanolic extracts and found that the water extracts had no activity, while some ethanolic extracts were effective against Proteus vulgaris [20]. Ramić and colleagues tested lavandin essential oil and ethanolic extracts and observed strong antibacterial activity against one of the major food-borne pathogens, Campylobacter jejuni, with the EOs exhibiting the strongest effect (MIC of 0.25 mg/mL), whereas the ethanolic extracts had MICs of 0.5-1 mg/mL [25]. The antimicrobial activity of lavandin ethanolic extracts of the same 'Budrovka' cultivar was also confirmed by other researchers, Blazenkovic et al., who found that ethanolic extracts, especially those from flowers, exhibited antimicrobial activity against a broad spectrum of bacteria, yeasts, molds, and dermatophytes. The antimicrobial activity of the extracts decreased in the order of plant part: flowers > leaves > inflorescence stalks [26].

Antibacterial effect Citrobacter rodentium-a bacterium used to model infections by the human-specific enteric bacterial pathogens L. x intermedia 'Okanagan' and wild-type EOs Antimicrobial activity against the pathogen was observed in vitro and in vivo in mice. 'Okanagan' EO (OEO, a cultivar rich in 1,8-cineole and borneol) exhibited more potent antibacterial activity than EO from wild-type lavandin. OEO inhibited systemic infection of C. rodentium in mice and modulated the enteric microbiota.
Firmicutes enteric bacteria, segmented filamentous bacteria, Clostridia spp., and Eubacterium rectale were significantly increased in the ceca, while several other microbes such as Bacillus spp., Lactobacillus spp., and the Clostridium coccoides group, remained the same. Both EOs inhibited adherence (through the action of 1,8-cineole and borneol) and growth of C. rodentium in vitro. In disk diffusion assays, 20 µL OEO showed a 24 mm inhibition zone, and wild-type EO 22 mm (with a statistically insignificant difference). 1,8-cineole was the oil constituent with antimicrobial activity, while no camphor and borneol activity was found. However, in isolation, this monoterpenoid was not as effective as in oils suggesting that a combination of constituents in EO synergistically produces the greatest antimicrobial activity. [14] Enterococcus faecium, Shigella flexneri, Salmonella typhimurium, Escherichia coli, Streptococcus pyogenes L. x intermedia and L. angustifolia EOs, grown in western Romania Studied lavender and lavandin EOs presented significant bactericidal effects against S. flexneri (inhibition zones for 20 µL EO of LA-20 mm, LI-26 mm), S. aureus (LA-20 mm, LI-20 mm), and E. coli (LA-20 mm, LI-21 mm), but not against S. pyogenes in disc diffusion tests. In most cases and doses, L. x intermedia activity was higher. However, the studied EOs, although distilled from flowering shoots, had an unusual composition, with almost none of the linalool and its acetate. [15] In disc diffusion tests, LI essential oils showed significant antibacterial activity against E. coli and S. agalactiae, with inhibition zones of 15-23 mm and 12-17 mm, respectively, when a 20 µL amount was used. Slightly weaker effects were observed against K. pneumoniae (9-16 mm) and S. aureus (9-15 mm). The studied EOs were obtained from leaves and had a different composition compared to the standard EO distilled from flowering tops. In particular, it had almost no linalool and its acetate and consisted mostly of 1,8-cineole, borneol, and camphor. The authors observed that antibacterial activity is significantly correlated with the presence of 1,8-cineole. [13] E. coli, S. aureus, B. cereus L. x intermedia EO 'Super' The study evaluated the antimicrobial effects of both free and encapsulated lavandin essential oil against three pathogenic bacteria. Before analyzing the formulations with encapsulated essential oil, the antibacterial activity of pure lavandin essential oil was tested in vitro, and MICs were 7.1 mg/mL for both E. coli and S. aureus and 3.6 mg/mL for B. cereus. Lavandin oil's antibacterial activity could be enhanced by encapsulation due to the protection and controlled release of the essential oil. Soybean lecithin was found to be an efficient carrier material for LI essential oil, with better results than other carriers, and it was suggested that liposomes formed could cross both phospholipid layers of Gram-negative bacteria and deliver the essential oil inside the cell of bacteria. Antibacterial effect Campylobacter jejuni biofilms on abiotic surfaces L. x intermedia 'Bila', 'Budrovka SN', 'Budrovka'; EOs and ethanolic extracts, grown in Croatia All studied EOs possessed the best antibacterial activity with a minimal inhibitory concentration of 0.25 mg/mL. A weaker effect was observed for ethanolic extracts of flowers before distillation and post-distillation with MIC of 0.5-1 mg/mL. 
Lavandin ethanolic extracts of flowers prior to distillation were found to be more efficient in decreasing intercellular signaling and adhesion of C. jejuni as compared to lavandin EOs and ethanolic extracts of post-distillation waste material. However, lavandin EOs exhibited a slightly stronger impact on inhibiting the formation of biofilm. The authors concluded that lavandin formulations can be used as antimicrobial agents to control the development of C. jejuni biofilm. [25] Lactobacillus spp., S. mutans (oral pathogenic bacteria) L. x intermedia EO 'Grosso', 'Sumian'; EOs of other plants additionally tested Lavandin EOs exhibited an antibacterial effect. A microwell dilution assay revealed MIC for EO of 'Grosso' as 16 µL/mL for two strains of Lactobacillus spp. and 16 and 256 µL/mL for two strains of S. mutans. 'Sumian' EO had MICs of 32 and 2 µL/mL for Lactobacillus strains and 32 and 512 µL/mL for S. mutans. Lavandin oils had a weaker effect than Thymus and Rosmarinus EOs and their MICs were 4-16 and 1-32 µL/mL, respectively. The authors found that the antibacterial power was positively correlated with the content of menthol, thymol, and carvacrol in the essential oil. The effects of LI oils were not significant enough to include them in the further stages of the experiment, which involved screening combinations of essential oils and chlorhexidine. The lavender hydrosols and aqueous foliage extracts did not possess antibacterial activity in the disc diffusion assay (10 µL of agent). Some ethanolic extracts displayed activity against P. vulgaris (inhibition zones of 7-9 mm) but no action against other studied microorganisms. EOs exhibited antibacterial effects (inhibition zones of 7-15.5 mm). There was considerable variability in the activity of the essential oils. However, no oil presented the highest antibacterial activity against all bacteria. Furthermore, there was no observed correlation between the content of major chemical components and antibacterial activity. P. aeruginosa was the only bacterium not susceptible to any studied essential oil of Australian origin [28] Paenibacillus larvae-American Foulbrood Disease of honeybees' pathogen L. x intermedia and other plants EOs The antibacterial activity of lavandin oil against eight strains of P. larvae was demonstrated, with MIC ranging from 0.45 to 0.6 µL/mL. Though, the stronger activity of some other EOs, such as Cymbopogon citratus (0.05-0.1), Origanum vulgare (0.25-0.45), Satureja hortensis (0.2-0.25), and Thymus vulgaris (0.1-0.15) was observed. On the other hand, a weaker effect was observed for EOs of Eucalyptus globulus (>0.7), Mentha x piperita (0.6-0.65), and Rosmarinus officinalis (0.7). The authors suggested that lemon grass and thyme essential oils have the potential to inhibit Foulbrood Disease in honeybee colonies. [29] Antibacterial and antifungal effects S. aureus, Streptococcus pyogenes, P. aeruginosa, E. coli, Candida albicans L. x intermedia and L. angustifolia EOs, grown in Romania The study evaluated the antimicrobial activity of essential oils against four pathogenic bacteria and one genus of yeasts that causes fungal infections. L. angustifolia EO was noticed to be more active against target bacterial strains compared to L. x intermedia. The calculated MIC value for LI was 0.5 µL/mL, and the MBC value was 1.00 µL/mL, whereas for LA, the MIC was 0.25 µL/mL and MBC was 0.25 µL/mL. 
On the other hand, the antifungal activity of LI essential oil was three times more effective, and 10 µL of LI oil produced a 17 mm zone of inhibition in the disc diffusion method compared to the 5 mm zone produced by LA oil. [18] Antibacterial and antifungal activity of both L. x intermedia and L. angustifolia was shown. MIC and MLC of both lavender oils were mostly above 2% (v/v). The exception was T. violaceum, where the MIC and MLC did not exceed 0.2%. Accordingly, the antimicrobial activity of both lavender EOs was lower than that of Origanum hirthum, Satureja montana, Monarda didyma, and M. fistulosa. Hydrolates were much weaker, with MICs and MLCs mostly above 50%. LI oil was slightly more active than LA oil against bacteria and fungi. [17] S. aureus, E. coli, C. albicans L. x intermedia and other plants EOs from leaves of Turkish origin The authors studied the antimicrobial activities of EOs of L. x intermedia and other plants using the disk diffusion method. The effect was assessed based on the zones of inhibition (ZI). The antimicrobial effect varied depending on the microorganism type and the dose of the essential oil. The LI EOs exhibited potent effects against C. albicans at doses of 15 µL (resulting in a 30 mm zone of inhibition) and 7.5 µL (20 mm ZI) but moderate effects at lower doses of 3 µL (12 mm ZI) and 1.5 µL (6 mm ZI). The antimicrobial effect was weaker in the case of E. coli (7-12 mm ZIs) and S. aureus (6-13 mm ZIs) in all tested doses. Among all analyzed plants, Thymbra spicata and Satureja macrantha essential oils had stronger antimicrobial activity against all tested microorganisms. [19] Antifungal effect Cladobotryum mycophilum-cobweb disease of button mushroom (Agaricus bisporus) L. x intermedia EO and other plants' EOs In vitro assays showed that the EOs obtained from Thymus vulgaris (median effective concentration EC 50 of 35.5 mg/L) and Satureja montana (42.8 mg/L) were the most effective for inhibiting the mycelial growth of C. mycophilum and were also the most selective between C. mycophilum and A. bisporus. The EOs of L. x intermedia (EC 50 = 146.6 mg/L) and Thymus mastichina (175.7 mg/L) had the strongest antifungal effect against A. bisporus, which could be attributed to the content of alcoholic monoterpenoids. [30] Agricultural pathogens: Mucor piriformis, Botrytis cinerea, Penicillium expansum L. x intermedia 'Grosso' and 'Provence', and L. angustifolia EOs The antifungal effect of essential oils from LA and LI, as well as individual essential oil constituents, was evaluated using disk diffusion assays against three agricultural pathogens. True lavender oil exhibited no significant effect. However, the LI 'Provence' oil showed some antifungal activity against B. cinerea, resulting in a zone of inhibition 7.5 mm at a concentration of 50 µL/mL of oil. Among the terpenes studied, carvacrol was the most effective component (although this component is not present in Lavandula oils), which showed an inhibition zone of 40 mm at a concentration of 50 µL/mL after 24 h. Lavandulol and linalool produced zones of inhibition of 9 and 7-8 mm, respectively. Antifungal effect Aspergillus nidulans, Trichophyton mentagrophytes, Leptosphaeria maculans and Sclerotinia sclerotiorum Australian-grown L. x intermedia 'Grosso', 'Miss Donnington', L. angustifolia, and other Lavandula species EOs and hydrolates All tested essential oils displayed some antifungal activity. EOs derived from LA, and all three tested LI cultivars caused a decrease in the growth of A. 
nidulans and T. mentagrophytes by at least 50% after 6 days of the experiment. Lavandula stoechas was particularly effective against the two agricultural fungi: L. maculans and S. sclerotiorum. No evidence of antifungal activity was observed for all hydrolates studied. [20] Trichophyton mentagrophytes (tinea pedis causing pathogen) Lavandula x intermedia and other plants' ethyl acetate extracts The results showed that most of the herbs exhibited potent vapor activity against T. mentagrophytes, of which Roman chamomile, curry plant, hyssop, lavandin, marjoram sweet, orange mint, spearmint, monarda, oregano, rosemary, rue sage, tansy, tarragon, thyme common and yarrow showed the most potent activity. The vapors of these herbs exhibited lethal properties by killing over 99.99% of fungus. The lavandin extract exhibited a strong antifungal effect producing a 36 mm zone of inhibition in agar diffusion assay. [31] Ascosphaera apis-a chalkbrood disease of honeybees L. x intermedia and other plants' EOs Lavandin oil was effective at 700 µL/L against 3 of 5 studied A. apis strains regardless of the geographical significance of the strains. At all concentrations tested, coriander oil was the most effective fungistatic control. [21] Summarizing the above findings, there is no doubt that lavandin preparations, such as essential oil or ethanolic extract, possess antimicrobial activity. This activity is at least as strong as the activity of L. angustifolia or stronger. Hydrolates and other lavandin preparations do not exhibit antimicrobial power, or it is significantly weaker. Regarding the antimicrobial activity of essential oils, even though it is proven and well-established, the therapeutic effect is significantly weaker when compared to synthetic antibiotics. Essential oils are volatile, and their ability for quick vaporization can shorten their effectiveness. On the other hand, this drawback can be at least partially overcome by an appropriate drug formulation [6]. Varona et al. demonstrated that the activity of lavandin oil could be enhanced by encapsulation due to the protection and controlled release of the oil components [27]. According to the literature, antimicrobial activity is usually correlated with phenolic, aromatic, or alcoholic components of essential oils. Their main mechanism of action is related to some disruption of the cell membrane and its increased permeability [32][33][34][35]. The main oil component of L. x intermedia, linalool, can destroy bacterial cell walls, change membrane potential, and enhance membrane leakage [36]. The antibacterial effect of EOs is generally more pronounced in the case of G+ bacteria. It is believed that the rigid outer membrane of G-bacteria limits the diffusion of hydrophobic compounds, therefore, protecting them against the harmfulness of EOs components [10]. However, the presented review of the studies of lavandin oil does not always confirm this general belief. Diverse antibacterial powers were observed regardless of gram-positive or negative attribution of studied bacteria. Other Biocidal Articles reporting on the biocidal power of lavandin preparations, other than the antimicrobial power, are not numerous. They are compiled in Table 2. An antiparasitic activity of lavandin was indicated by Moon and coworkers. The scientists studied the effect of lavandin and true lavender essential oils on three protozoal pathogens: Giardia duodenalis, Trichomonas vaginalis, and Hexamita inflata. 
They demonstrated that oil concentrations of 1% or less could eliminate pathogens in the culture. L. angustifolia essential oil presented a slightly stronger effect than L. x intermedia. The authors stated: "Whether lavender essential oils can be used as a viable treatment for infected water sources or as a treatment of mammalian and/or fish parasitic infection is unknown. The previously unreported finding that these oils are potent anti-protozoan agents should, however, be further investigated" [37]. According to our best knowledge, there are no further reports on the antiprotozoal activity of L. x intermedia EO. At lower levels (0.1%), L. angustifolia was narrowly more effective than L. x intermedia against G. duodenalis and H. inflata. Microscopic analysis suggested that the mode of action of a substance may be via cell lysis, but further studies were needed to confirm the exact mechanism. [37] Root-knot nematode Meloidogyne incognita, walnut root lesion nematode Pratylenchus vulnus Strong toxicity of all tested EOs to nematodes was indicated in vitro, even at low concentrations. The oil of 'Rinaldi Ceroni', with LC 50 equal to 1.2 and 3.1 µg/mL for M. incognita and P. vulnus, respectively, was more potent than that of 'Abrial' and 'Sumiens', with 24.9 and 7.4, 17.4 and 0.5 µg/mL, respectively. When compared to the chemical positive control oxamyl, the EOs showed much higher toxicity only after a 4-h exposure. However, with incubation time, this difference was diminished and not evident after 24-h exposures. Activity differences among the cultivars were less apparent in vivo studies in soil. All studied lavandin EOs also significantly decreased M. incognita eggs density and their hatchability, as well as significantly reduced gall formation on tomato roots. Overall, oil treatments with the lavandin EOs positively impacted tomato plant growth. [38] The study showed the lack of nematicidal activity of L. x intermedia EO against M. javanica. It had an antifeedant effect against S. littoralis but not against other tested insects. The biocidal impact of this EO was strong in the case of S. littoralis (LC 50 25 µg/cm 2 ) and H. lusitanicum (LC 50 28 µg/mg cellulose). The LI 'Super' oil did not show adverse effects on the germination of L. sativa and L. perenne seeds but significantly affected the growth of the root of L. sativa. [41] Confused flour beetle Tribolium confusum (Coleoptera); Radish plant Raphanus sativus L. x intermedia hydrolate from stems and flowers insect repellent allelopathic Behavioral tests with T. confusum showed good repellencies for both flowers and stems LI hydrosols. They exhibited (70-90% repellence for a dose of 12 µL/cm 2 and RD 50 = 3.58 and 3.26 µL/cm 2 for flowers and stems, respectively). However, its repellent effects were lower than the one of the synthetic repellent MR-08 (RD 50 = 0.001 µL/cm 2 ). Regarding the allelopathic activity-both hydrolates inhibited the seed germination of R. sativus. Inhibition was higher in the case of the flower hydrolate (percentage of germination GP = 0%) than stem hydrolate (GP = 24%). [42] Confused flour beetle Tribolium confusum L. x intermedia and other plants' EOs insecticidal Lavandin oil and other tested oils, except Origanum vulgare, exhibited strong toxicity to larvae (LD 50 1.8, 59.7 or 109.9 µL/L air), pupae (37.3 and 38.7), and adults of T. confusum (29.8-81). The fumigant toxicity of these essential oils depended on the insect's developmental stage. 
[43] The essential oils studied had antifeedant effects against the majority of targets tested. The most active oils were of L. latifolia and T. vulgaris. LI essential oil showed, at a dose of 100 µg/cm 2 , 86% and 88% feeding inhibition for L. decemlineata and S. littoralis, respectively, as well as 96% and 85% settling inhibition in the case of M. persicae and R. padi. Some phytotoxic activity against L. sativa and L. perenne was observed. The oils of Lavandula spp. were the most active against L. sativa, while T. vulgaris oil was the most phytotoxic against L. perenne. In both species, the germination percentage was more affected than the root and leaf length. Antifungal action against studied Fusarium spp. was indicated (EC 50 0.3-1.1 mg/mL). [44] Spotted wing Drosophila suzukii L. x intermedia EOs 'Grosso' and 'Provence', L. latifolia and other plants' EOs insecticidal oviposition deterrent LI oil exhibited an insecticidal effect with EC 50 ranging from 5.22 to 9.01 µL/L air in fumigation toxicity assays and 0.86-12.58% in contact toxicity assays. Linalool was the most effective monoterpene in fumigation assays, and L. latifolia essential oil was found to be the most effective whole essential oil (EC 50 3.28-4.21 µL/L air). In contact toxicity assays, 1,8-cineole was the most effective monoterpene, and L. latifolia EO was the most effective of all tested oils with EC 50 of 0.69%. [45] Insects: Sitophilus zeamais, Cryptolestes ferrugineus, Tenebrio molitor L. x intermedia (Italian origin) and other plants' EOs insect repellent Lavandin oil showed a repellent activity for S. zeamais and C. ferrugineus, but the effect on T. molitor was less evident. [46] Bean weevil Acanthoscelides obtectus L. x intermedia and other plants' EOs insecticidal All tested essential oils exhibited robust activity against A. obtectus adults, with varying LC 50 ranging values from 0.5-19 mg/L air depending on insect sex and the type of the essential oils. Lavandin and rosemary leaf essential oils were the most active (LC 50 0.5-2.4 mg/L air), followed by EO from lavender flowers (LC 50 1.9-3.7 mg/L air). Eucalyptus EOs had weaker fumigant activity with LC 50 of 3-19 mg/L air. A positive correlation between total oxygenated monoterpenoid content and insecticidal activity was observed. Terpinen-4-ol, camphor, and 1,8-cineole alone exhibited the lowest LC 50 values. [47] Plants: annual ryegrass Lolium rigidum, canola Brassica napus, wheat Triticum aestivum, subterranean clover Trifolium subterraneum Aqueous extracts of L. x intermedia 'Grosso' and other lavender species allelopathic L. x intermedia was the most phytotoxic among tested Lavandula species. It was determined that the stem and leaf extract of LI 'Grosso' significantly reduced the root growth of several tested plant species. The growth of L. rigidum roots was completely inhibited with an extract concentration of 10%. The fraction consisting of coumarin and 7-methoxycoumarin was the most phytotoxic. Coumarin was supposed to be the most phytotoxic and responsible for the observed phytotoxicity of the lavandin extract. Soil trials were conducted using the coumarin standard and the lavandin extract. In both cases, shoot lengths and weights were significantly reduced by a post-emergence application at all concentrations tested. [48] Lavandin oil was also investigated regarding nematicidal power. D'Addabbo et al. observed a powerful biocidal effect on pathogenic root-knot nematode-Meloidogyne incognita and walnut root lesion nematode-Pratylenchus vulnus. 
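Throughout these studies, potency is summarized as LC50 or EC50 values, i.e., the concentration that kills or affects 50% of the test organisms; such values are typically estimated by fitting a dose-response curve to mortality data. A minimal sketch of this kind of calculation is given below; the concentrations and mortality fractions used here are invented for illustration only and are not taken from any of the cited studies.

# Minimal illustration of estimating an LC50 from dose-response data.
# The concentrations and mortality fractions are hypothetical and serve
# only to show the fitting step; they are not from the cited studies.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(conc, lc50, slope):
    """Two-parameter log-logistic dose-response model (mortality from 0 to 1)."""
    return 1.0 / (1.0 + (lc50 / conc) ** slope)

concentration = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])   # hypothetical doses, ug/mL
mortality = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 1.00])  # observed mortality fractions

params, _ = curve_fit(log_logistic, concentration, mortality, p0=[1.0, 2.0])
print(f"estimated LC50 ~ {params[0]:.2f} ug/mL (slope ~ {params[1]:.2f})")

The log-logistic model is only one common choice; the cited studies may have used other dose-response models or probit analysis.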
The activity was so high (LC 50 equal to 1.2 and 3.1 µg/mL for the essential oil of one LI cultivar) that the authors proposed the oil as a component of new nematicidal formulations alternative to synthetic nematicides. The effectiveness of lavandin oils of three different cultivars was evaluated both in vitro and in vivo in soil. LI oils significantly reduced parasite egg density, hatchability, and gall formation on roots, overall positively impacting the growth of the studied tomato plants [38]. A potent nematicidal effect was also documented in vitro and in vivo by Andrés et al. [39]. They studied lavandin and some other plants' hydrolates, by-products of steam distillation, against the root-knot nematode Meloidogyne javanica. All tested hydrolates showed nematicidal effects in vivo (both larval mortality and suppression of egg hatching). The nematicidally active components of lavandin hydrolates were found to be present in the aqueous fraction, indicating that some polar constituents of lavandin, rather than the nonpolar ones, are responsible for the observed effect. This is supported by the lack of nematicidal activity of L. x intermedia essential oil against the same nematode, M. javanica, observed by de Elguea-Culebras et al. [41]. Similarly, Park and coworkers observed no nematicidal effect of lavandin oil against the pinewood nematode (Bursaphelenchus xylophilus), but they did not present detailed results for lavandin and focused only on three chosen essential oils from other plants [40]. Considering all the above, the nematicidal efficacy of lavandin is uncertain, and more studies are needed in this field. The above-described nematodes are loss-making pests in agriculture. The other, even bigger group of pests is insects. Therefore, lavandin oils were also tested regarding their insecticidal or repelling properties for potential use as natural-based plant protection agents. The repelling properties of LI essential oils were indicated for the following insects: maize weevil Sitophilus zeamais, rusty grain beetle Cryptolestes ferrugineus, and yellow mealworm beetle Tenebrio molitor [46]. LI hydrosols also exhibited repellency in studies on the confused flour beetle Tribolium confusum [42]. This insect was also studied by Theou and coworkers [43]. They tested lavandin and other essential oils and found that all tested oils, except oregano oil, exhibited strong toxicity to all developmental stages of the pest. They proposed the EOs as fumigants for the protection of stored products in storehouses. The insecticidal power of lavandin oils was also recognized by other researchers, who showed its efficacy against the bean weevil Acanthoscelides obtectus, the Colorado potato beetle Leptinotarsa decemlineata, and the spotted wing drosophila Drosophila suzukii. A positive correlation between total oxygenated monoterpenoid content and insecticidal activity was observed, with linalool and 1,8-cineole being the most effective terpenes [41,45,47]. Essential oils are also generally known for their allelopathic activity. In the case of lavandin oil, to our knowledge, there are two studies concerning it. Both studies examined the effect of lavandin oil on lettuce Lactuca sativa and English ryegrass Lolium perenne. de Elguea-Culebras et al. showed low to moderate toxicity for the assayed oils in the allelopathic test. The LI essential oil did not show negative effects on the germination of L. 
sativa but reduced the growth of its root [41]. Santana and coworkers observed some phytotoxic activity against L. sativa and L. perenne seeds, with negative effects on germination and growth [44]. Regarding lavandin hydrolates, both the flower and stem hydrolates were able to inhibit the germination of radish Raphanus sativus, with a stronger effect of the flower hydrolate [42]. Extensive allelopathic studies were conducted by Haig and coworkers, who studied the effects of aqueous extracts of several lavender species, including L. x intermedia [48]. The researchers indicated that L. x intermedia was the most phytotoxic among the tested species. It showed an effect on all four tested plant species. After fractionation of the LI extract, they found that the fraction consisting of coumarin and 7-methoxycoumarin was the most phytotoxic, and that coumarin was largely responsible for the effect. Coumarin is a well-known phytotoxin, and lavandin is known to contain more coumarins than, for example, L. angustifolia [48,49]. This explains the stronger effect of lavandin extracts as phytotoxic agents. Antioxidant Power Oxidative stress is caused by an imbalance between the production and deactivation of reactive oxygen species (ROS). ROS are naturally generated as by-products of oxygen metabolism and can play some physiological roles (among others, cell signaling). Under stressful environmental conditions and in the presence of xenobiotics, the production of ROS increases, leading to an imbalance that causes cell death and some pathologies, such as certain cancers or neurodegenerative disorders. Therefore, apart from the endogenous antioxidant systems, exogenous antioxidants are also intensively studied. Antioxidants are compounds capable of slowing or retarding the oxidation of an oxidizable material and protecting organisms from oxidative stress. As synthetic antioxidants such as butylated hydroxyanisole (BHA) or butylhydroxytoluene (BHT) are suspected to be potentially harmful to human health, many natural products, including essential oils and plant extracts, have been investigated for their antioxidant properties [50][51][52]. Antioxidant power is the second most frequently reported activity in the literature for L. x intermedia, just after the biocidal one. We found ten original research articles reporting it: three relate to the essential oil, six to plant solvent extracts, and one to both formulations. Most of them relate to in vitro studies based on simple radical scavenging assays. However, there is a report by Hancianu et al., who conducted a detailed study on Wistar rats subjected to scopolamine, an induced rat model of dementia [53]. The week-long inhalation of the essential oils of L. angustifolia and L. x intermedia by the rats induced some significant biochemical changes in their brains. Temporal lobe homogenates indicated increased activity of catalase (CAT), superoxide dismutase (SOD), and glutathione peroxidase (GPX), increased content of reduced glutathione (GSH), and a reduced malondialdehyde (MDA) level. Rats in both lavender groups exhibited a significant decrease in MDA levels. MDA is one of the final products of polyunsaturated fatty acid peroxidation in cells. Decreased MDA levels reflected the reduced lipid peroxidation caused by free radicals. Increased activity of CAT, SOD, and GPX, which constitute an endogenous enzymatic antioxidant defense system [50], results in less oxidative stress. 
The results of the experiment show that lavender oils can act as indirect antioxidants. Indirect antioxidants enhance natural antioxidant defenses in living organisms by inducing the expression or increasing the activity of antioxidant enzymes [51]. Additionally, the authors reported that DNA cleavage patterns (present in the scopolamine-alone-treated rat group) were absent in the scopolamine-treated rats exposed to lavender oils, suggesting that lavender oils possess antiapoptotic and neuroprotective activity [53]. The antioxidant activity of lavandin EOs in vitro was indicated by Carrasco and coworkers [54,55]. In one paper, they described studies of lavandin 'Abrial', 'Super', and 'Grosso' essential oils of Spanish origin. They conducted five different antioxidant assays: ABTS, DPPH, oxygen radical absorbance capacity (ORAC), chelating power (ChP), and reducing power (RdP). The studied L. x intermedia oils showed moderate antioxidant activities, varying between cultivars and tests. The authors also indicated mild inhibition of lipoxygenase (LOX) in vitro. LOX is a crucial enzyme in the transformation of arachidonic acid into leukotrienes, which are involved in inflammation. LOX inhibition can lower leukotriene levels, thereby delivering an anti-inflammatory effect [54,56]. In the other article, Carrasco et al. studied the antioxidant activity and hyaluronidase inhibition of essential oils from different Lavandula and Thymus species. Hyaluronidase inhibitors can potentially serve as anti-inflammatory, antimicrobial, and anti-aging agents [57]. However, the researchers did not observe any inhibitory effect of lavandin oil on the enzyme and only a very weak effect of LA oil. Regarding antioxidant activity, the authors indicated some activity of LI oils, in most assays weaker than that of true lavender oil, while both lavender oils showed significantly lower activity than Thymus zygis oil, which was explained by its high content of thymol, a compound with known and strong antioxidant properties [55]. In general, phenolic agents act as antioxidants due to their high reactivity with peroxyl radicals, which are disabled by hydrogen atom transfer [51]. Lavender essential oils do not contain any significant amounts of phenolic terpenes and phenylpropanoids. In general, essential oils containing little or no phenols and cyclohexadiene-like components (e.g., γ-terpinene, α-terpinene, and α-phellandrene) do not exhibit significant direct antioxidant potency [51]. Direct antioxidants are compounds able to impair the radical chain reaction causing oxidation. However, despite the low presence of high-impact direct antioxidants, lavandin oil might show an indirect antioxidant property, as indicated by the above-described experiment of Hancianu et al. [53]. One more issue needs to be raised whenever the antioxidant activities of essential oils are assessed based on indirect methods such as the DPPH, ABTS, FRAP, or Folin-Ciocalteu tests. These methods are flawed models of antioxidant properties and are often inappropriate for reliably measuring the antioxidant properties of essential oils. For example, the very popular and basic DPPH assay gives positive outcomes for some essential oils not due to their antioxidant activity, but rather due to abstraction of a hydrogen atom from C-H bonds of terpenes with a sufficiently low bond dissociation enthalpy, such as α- or β-pinene and limonene. 
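For reference, the readout of such a DPPH test is simply the loss of absorbance of the radical solution (usually measured at 517 nm), commonly expressed as a scavenging percentage; the formula below is the generic calculation and is not specific to any of the studies cited here:

\[ \text{Scavenging activity (\%)} = \frac{A_{\mathrm{control}} - A_{\mathrm{sample}}}{A_{\mathrm{control}}} \times 100 \]

where A_control is the absorbance of the DPPH solution without the test material and A_sample is the absorbance after incubation with the essential oil or extract.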
Therefore, the discoloration of the reactant might not necessarily indicate antioxidant potency but merely the presence of highly oxidizable compounds in the essential oil [51]. Moreover, the results are often difficult to compare due to the multitude of assays and the different, inconsistent units used. Thus, the discussion and most of the comparisons are usually performed between samples from one experimental setup. Lavandin ethanolic extracts exhibit higher antioxidant activity than lavandin oil [25]. Table 3, among other biological activities, presents the reported antioxidant activity of different kinds of lavender extracts. The antioxidant power of lavandin extracts, superior to that of the EO, is caused by their different chemical composition, namely more abundant flavonoids, coumarins, phenolic acids, and their glycosides [25,49,52,58]. Higher activity was shown for extracts of post-distillation waste than for the raw plant material before distillation [25,59]. Regarding the plant part, ethanolic extracts of leaves were found to be the most potent, followed by flowers and inflorescence stalks [60]. In one experiment, Berrington and Lall studied the antioxidant power of acetone LI and LA extracts, as well as of other plant species. They found that both lavender species had the lowest antioxidant activity (DPPH assay) among the tested plant extracts (Origanum vulgare, Rosmarinus officinalis, Thymus vulgaris, Petroselinum crispum, Foeniculum vulgare, and Capsicum annuum) [61]. Whenever the activities of both LI and LA were assessed in one study, they were rather similar, with antioxidant power varying depending on the assay and tested cultivar. True lavender extracts contained more phenolic acids and flavonoids, while coumarins were at higher levels in lavandin extracts [49,60,61]. Looking at the gathered studies, we cannot claim any superiority of L. angustifolia over L. x intermedia in this field. Table 3. Bioactivities of lavandin natural products reported in the literature (other than the biocidal). Only original research articles were considered. Studied Agents Methods of Assessment Observations, Conclusions Activity Citation A mixture of L. x intermedia 'Super', Citrus bergamia (bergamot), and Cananga odorata (ylang-ylang) EOs Pittsburgh Sleep Quality Index (PSQI), a randomized, double-blind crossover study in humans The goal of the investigation was to determine if there was a significant difference between the sleep quality of patients in cardiac rehabilitation who inhaled a placebo and those who inhaled an aroma (for five consecutive nights). The sleep quality of participants receiving the studied oil mixture was significantly better than that of participants receiving the placebo oil. Improving sleep quality [62] L. x intermedia 'Grosso' EO, French origin In vivo mice; in vitro guinea pigs and rats; platelet aggregation studies, clot retraction assay, tail transection bleeding Antiplatelet properties of lavender oil were exhibited on guinea-pig platelet-rich plasma towards platelet aggregation induced by arachidonic acid, U46619, collagen, and ADP. Lavandin oil (100 mg/kg/day for five days) significantly reduced thrombotic events in mouse models of pulmonary thromboembolism without inducing hemorrhagic complications, at variance with aspirin used as a reference drug. Major components of the oil were also studied, but none of them triggered the same effects as the whole oil. antithrombotic [63] L. 
x intermedia 'Okanagan' and wild-type EOs In vivo mice studies; survival and body weight measurements, histopathological scoring, bacterial count, immunofluorescence, RT-PCR Lavandin oil was beneficial in the case of acute colitis induced by Citrobacter rodentium in mice (reduced morbidity and mortality). The oil lowered the expression of iNOS, IFN-γ, IL-22, and MIP-2α mRNA and reduced the inflammation compared with infected control mice. LI oil prevented severe cecal mucosal damage during the infection. anti-inflammatory [14] L. x intermedia 'Grosso' EO, French origin In vivo rats and mice study; acetic acid writhing test, the activity cage, hot plate tests, Rainsford's method Orally administered or inhaled lavandin oil reduced the writhing response to acetic acid treatment. Inhalation of LI oil produced an inhibition of the hot-plate response proportional to the time of exposure to oil vapors. Moreover, oral administration of lavandin oil, as well as of its separate main components, linalool and linalyl acetate, protected against acute ethanol-induced gastric ulcers. analgesic gastroprotective [64] L. x intermedia, L. angustifolia EOs (Romanian origin) and Silexan In vivo rats' studies; behavioral tests: Y-maze task, elevated plus-maze task, forced swimming test, radial arm-maze task The subjects were Wistar rats subjected to scopolamine, an induced rat model of dementia. Daily respiratory exposure to all three tested lavender oils for a week reduced anxiety and depression (based on elevated plus-maze and forced swimming tests). Moreover, the performance in Y-maze and radial arm-maze tests was improved, suggesting positive effects on spatial memory. anxiolytic antidepressant improved spatial memory [65] L. x intermedia, L. angustifolia EOs (Romanian origin) and Silexan In vivo rats' studies; Superoxide dismutase (SOD), glutathione peroxidase (GPX), and catalase (CAT) specific activities, the total content of reduced glutathione (GSH), malondialdehyde (MDA) level (lipid peroxidation), and DNA fragmentation assays Potent neuroprotective effects of both lavender oils against scopolamine-induced oxidative stress in the rat brain were shown. The subjects were Wistar rats subjected to scopolamine, an induced rat model of dementia. The EOs were administered via an electronic vaporizer in a Plexiglas chamber. Daily subacute exposure for a week to the EOs significantly increased the activity of antioxidant enzymes (SOD, GPX, and CAT), increased the content of reduced GSH, and reduced lipid peroxidation (MDA level) in rat temporal lobe homogenates, suggesting antioxidant potential. DNA cleavage patterns were absent in the lavender groups, suggesting anti-apoptotic activity. The substantial antioxidant and antiapoptotic effects of the EOs were recognized as the cause of the neuroprotective effects against scopolamine-induced oxidative stress in the rat brain. antioxidant antiapoptotic neuroprotective [53] L. x intermedia 'Grosso' and 'Super'; L. angustifolia, L. latifolia, and Thymus zygis and Thymus hyemalis EOs DPPH, ORAC, chelating power test, nitric oxide scavenging capacity, reducing power, TBARS, hyaluronidase inhibitory activity LI oils had the most potent chelating power among the tested EOs. It was explained by the high contribution of ester and ether groups in their essential oils. The oils displayed significant values in hydroxyl and peroxyl radical scavenging assays, although lower than LA oil. 
That was explained by the high concentration of alcohol and ester groups, mostly linalool and linalyl acetate. Both LI essential oils also showed weaker performance in DPPH and ABTS tests than LA oil, and substantially lower performance than T. zygis EO due to its high thymol content. No inhibition of hyaluronidase by lavandin essential oil was observed, compared to weak inhibition for L. angustifolia and strong inhibition for T. zygis (high thymol chemotype). antioxidant no hyaluronidase inhibition [55] L. x intermedia 'Abrial', 'Super' and 'Grosso' EOs, Spanish origin ORAC, ABTS, DPPH, chelating power and reducing power; lipoxygenase inhibitory test The studied L. x intermedia oils showed moderate antioxidant activities, especially due to the presence of linalool and linalyl acetate. A mild lipoxygenase inhibitory effect was observed, suggesting a potential anti-inflammatory effect of LI oils, mostly due to the presence of linalool and camphor. antioxidant; anti-lipoxygenase (potential anti-inflammatory) [54] L. x intermedia 'Grosso' EO and its nanoformulations MTT assay The studied EO and its nanoformulations exhibited some antiproliferative activity on various tested cell lines. The EC50 values indicated that Caco-2 (human colorectal adenocarcinoma), MCF7 (human breast adenocarcinoma), and MCF10A (normal breast epithelial) cells were more resilient to the treatment, while CCRF-CEM (human lymphoblastic leukemia) and SHSY5Y (human neuroblastoma) cells were more responsive to it. Nanoformulations of lavandin oil demonstrated greater antiproliferative activity compared to the EO, especially in the case of more resistant cell lines. antiproliferative [66] L. x intermedia EO MTT assay Some cytotoxicity of T. vulgaris, R. officinalis, and L. x intermedia EOs on the MCF-7 (human breast cancer) cell line was observed in vitro. anticancer [67] The lavandin waste material had clear antioxidant activity, mostly due to its phenolic content. Twenty-three phenolic compounds were identified by liquid chromatography in methanol macerates from lavandin waste, including phenolic acids, hydroxycinnamoylquinic acid derivatives, glucosides of hydroxycinnamic acids, and flavonoids. Lavandin waste was found to be a relatively poor source of rosmarinic acid, though it had high levels of chlorogenic acid and hydroxycinnamic acid glucosides, which are particularly active in scavenging hydroxyl radicals. antioxidant [58] L. x intermedia 'Bila', 'Budrovka SN', 'Budrovka'; EOs and ethanolic extracts of flowers before distillation and of post-distillation waste material; grown in Croatia DPPH, LC-PDA-ESI-MS Lavandin ethanolic extracts exhibited higher antioxidant activity than EOs. Higher activity was shown for extracts of post-distillation waste than for flowers before distillation (more than 90% activity at a concentration of 1 mg/mL). To achieve a similar effect with flower extracts and EOs, concentrations 2.5 or 40 times higher, respectively, had to be used. The better antioxidant activity of the ethanolic extracts was attributed to their different chemical composition compared to EOs. The main components of both types of lavandin ethanolic extract were hydroxycinnamic acid glycosides and essential oil constituents: linalool, linalyl acetate, 1,8-cineole, and camphor. Extracts of post-distillation lavandin waste contained relatively higher concentrations of the identified compounds than those of fresh flowers, which could explain their better antioxidant activity. 
The tested cultivars 'Bila', 'Budrovka SN', and 'Budrovka' showed twice as much antioxidant activity as compared to the 'Sumiens', 'Super A', and 'Grosso' tested by other researchers. antioxidant [25] L. x intermedia 'Budrovka', and L. angustifolia flower, inflorescence stalk, and leaf ethanolic extracts, Croatian origin DPPH, iron chelating activity, reducing power, lipid peroxidation inhibition, total antioxidant capacity assays; HPTLC Ethanolic extracts of leaves of LI and LA were mainly the most active in terms of antioxidant power among the studied plant parts. Flower extracts were slightly weaker, and inflorescence stalk extracts had the lowest antioxidant activity. The performed antioxidant tests, except total antioxidant capacity, demonstrated that lavandin 'Budrovka' extracts were slightly less potent than those of L. angustifolia. It was justified by their lower polyphenolic contents. Rosmarinic acid was the most abundant polyphenolic constituent of both tested Lavandula extracts and was considered the main contributor to the exhibited antioxidant power. The authors concluded that L. x intermedia 'Budrovka' is as potent an antioxidant as L. angustifolia. antioxidant [60] Macerates, decoctions, and other extracts from L. x intermedia 'Grosso', 'Gros Bleu', and L. angustifolia, Polish origin TPC, TPF, HPLC-DAD/MS, DPPH, FRAP HPLC analysis showed the presence of phenolic acids (rosmarinic acid, ferulic acid glucoside, caffeic acid, ellagic acid), flavonoids (morin, isoquercitrin, vanillin), and coumarins (herniarin, coumarin). The content of phenolic acids and flavonoids was higher in lavender extracts, while coumarins were at higher levels in lavandin extracts of both cultivars, regardless of extract type. The highest radical scavenging and reducing properties were observed for ultrasonic-assisted extracts. Aqueous-ethanolic extracts (UAE and macerates) showed more antioxidant power than aqueous extracts (decoctions). No significant difference between the species was observed. Other Activities Lavender essential oils are very popular in complementary/alternative medicine and are commonly used in aromatherapy to reduce stress, increase relaxation, and improve the quality of sleep [7,52,69,70]. The anxiolytic action of L. angustifolia oil was proven both in rodents and humans by both inhalation and ingestion. The most substantial evidence comes from the studies of Silexan, a patented active substance comprised of Lavandula angustifolia essential oil produced from flowers, standardized, compliant with European Pharmacopeia, and manufactured by Dr. Willmar Schwabe GmbH & Co. KG in Germany. Oral administration of Silexan impacted positively depressed mood, sleep disturbances, and the overall quality of life of clinical trial participants. It was also demonstrated that sleep improvement is a result of the anxiolytic effect, not the sedative effect per se [71][72][73]. The mechanisms underlying the anxiolytic effects of Silexan are not certain. Action through the mediation of gamma-aminobutyric acid (GABA) was proposed by Aoshima and Hamamoto [74]. Schuwald et al. demonstrated that Silexan inhibited voltagedependent calcium channels (VOCCs) in neuronal cells at nanomolar concentrations [75]. It has been speculated that under anxiety or stress disorders, an enhanced calcium ions influx through VOCCs could increase the release of neurotransmitters such as glutamate and norepinephrine, which are involved in the pathogenesis of these diseases. 
Baldinger and coworkers showed that Silexan reduced the 5-HT1A receptor binding potential in brain clusters such as the temporal gyrus, the fusiform gyrus, the hippocampus, the insula, and the anterior cingulate cortex. This led to an increase in extracellular serotonin content. Most probably, the effect of Silexan, apart from VOCC inhibition, is additionally mediated via the serotonergic neurotransmitter system, particularly the 5-HT1A receptor, and not through a GABAergic mechanism [71][72][73]. The results for Silexan might suggest a similar action of lavandin oil. However, there are very few studies to support it. Hancianu et al. studied the effect of Romanian essential oils from L. x intermedia and L. angustifolia, as well as Silexan, on a dementia rat model. They found that not only Silexan but also both studied lavender oils acted neuroprotectively and improved spatial memory, as well as performance in various tests suggesting anxiolytic and antidepressant activity [53,65]. Regarding the influence on sleep, one human study with a mixture of lavandin, bergamot, and ylang-ylang oils was described. The five-day-long aromatherapy improved the subjective assessment of sleep quality in the examined patients in cardiac rehabilitation [62]. Another often-raised biological action of lavender oils is their anti-inflammatory effect. Linalool, the main component of lavender oil, has been reported to have anti-inflammatory effects [7,52,76,77]. Huo and coworkers tested the action of this terpene on lipopolysaccharide (LPS)-induced production of inflammatory mediators such as tumor necrosis factor α (TNF-α) and interleukin 6 (IL-6) [78]. LPS is a component of the outer membrane of Gram-negative bacteria, which triggers a strong immune response. The scientists found that linalool reduced the production of TNF-α and IL-6 both in stimulated macrophages in vitro and in vivo in lung injury mouse models. They showed that linalool treatment attenuated lung histopathology in mice. In search of the molecular mechanisms of linalool action, the researchers investigated the phosphorylation of some proteins in the NF-κB and MAPK pathways. Nuclear factor-κB (NF-κB) is a critical dimeric protein controlling the expression of over 500 genes, including many inflammation-associated factors. Many agents and stimuli can activate NF-κB through canonical and noncanonical pathways. Usually, NF-κB is kept in the cytoplasm in an inactive form due to binding with its inhibitor (IκB). Upon stimuli such as LPS, IκB is phosphorylated and later degraded. The released NF-κB is translocated to the nucleus, followed by p65 subunit phosphorylation, acetylation, and methylation, as well as subsequent DNA binding and gene transcription. In this way, NF-κB activation mediates the activation of proinflammatory genes, including TNF-α and IL-6 [78,79]. Huo and coworkers indicated that linalool blocked LPS-induced IκBα phosphorylation and consequently prevented NF-κB activation. They also noticed reduced phosphorylation of ERK, JNK, and p38 in the MAPK signaling pathway, another effect that led to the anti-inflammatory activity of linalool. The effect of linalool on acute lung inflammation induced by another stress stimulus, cigarette smoke (CS), was investigated by Ma et al. in vivo in mice [80]. It was indicated that linalool significantly reduced the production of TNF-α and IL-6, along with some other inflammatory mediators. Overall, it inhibited the infiltration of inflammatory cells and lung inflammation. 
The researchers noted that the terpene suppressed CS-induced NF-κB activation by inhibiting CS-induced IκBα and p65 NF-κB protein phosphorylation. Therefore, the demonstrated effect of linalool is in agreement with the former research conducted by Huo et al. [78]. This mechanism might be responsible for the reported anti-inflammatory properties of different Lavandula essential oils. Specifically for L. x intermedia, we found one study on this activity: the above-mentioned study of Baker and coworkers on the effect of lavandin oil on acute colitis induced in mice by Citrobacter rodentium bacteria [14]. The oil administration lowered the gene expression of inducible nitric oxide synthase, interferon-gamma, interleukin 22, and macrophage inflammatory protein-2α and decreased neutrophil and macrophage infiltration. In colitic mice, oral gavage with lavandin oil resulted in milder disease, decreased morbidity and mortality, and reduced intestinal tissue damage. Barocelli et al. also noted the gastroprotective effect of lavandin oil, as well as of linalool and linalyl acetate delivered separately. LI oil administration protected against acute ethanol-induced gastric ulcers in rats, but the mechanism of its protective action was not elucidated [64]. They also investigated the analgesic effect against chemical and thermal stimuli. Lavandin oil, especially when inhaled, prolonged the response time to unpleasant stimuli, suggesting an antinociceptive effect. However, some authors postulated that, instead of having a direct analgesic effect, inhalation of lavender oil may cause a more positive attitude and therefore alter the subjective perception of pain unpleasantness [7,81]. Linalool, the main lavender terpene, was demonstrated to induce analgesic effects in mice, significantly increasing the pain threshold and attenuating pain behaviors. Antinociceptive effects were absent in olfactory-deprived mice in which the olfactory epithelium was damaged. Thus, the action of linalool might be triggered by the olfactory sensory input. An immunohistochemical study revealed that linalool activated hypothalamic orexin neurons, crucial mediators for pain processing. Still, the actual mechanism is not understood [82,83]. Table 3 also presents the work by Ballabeni and coworkers, who indicated the antithrombotic and antiplatelet activity of lavandin oil both in vitro and in vivo. The oil administration significantly reduced thrombotic events in mouse models of pulmonary thromboembolism without inducing hemorrhagic complications, at variance with aspirin used as a reference drug (acetylsalicylic acid). Regarding the potential anticancer properties of lavandin, there are only a few investigations. Berrington et al. studied the acetone extracts of LI and LA on a cervical epithelial carcinoma cell line and observed no anticancer activity [61]. Tabatabaei et al. observed the potential anticancer/antiproliferative activity of lavandin oil on the human breast cancer cell line MCF-7 [67], and Ovidi and colleagues found similar activity on different cancerous cell lines [66]. The latter authors also found that nanoformulation of the essential oil increased its antiproliferative activity. As already mentioned, the biological effects of lavandin oil that have been tested and verified are mostly related to biocidal or antioxidant activity in vitro. These effects do not represent the full potential of its action in vivo. Lavandin oil is widely used in aromatherapy and massages. 
Despite this, the therapeutic effects of this oil have been largely overlooked in scientific studies when compared to its parent species. When L. x intermedia was studied and compared to L. angustifolia in terms of biological/therapeutic effects, it usually gave a similar performance, and in the case of antimicrobial activity, an even more powerful one. The main components of both lavender oils, linalool and linalyl acetate (the latter quickly metabolized in vivo to linalool [84]), affect molecular pathways and induce some biological activities. Thus, we can expect similar biological activities in general. The differences in composition lie in the secondary and trace constituents, such as camphor, 1,8-cineole, and borneol, which sum up to approximately 30% of the oil. These terpenoids are usually credited with the increased biocidal action of LI and LL, but increased biocidal properties are a double-edged sword. Of the three mentioned, especially camphor, despite its wide use in pharmacy, is considered potentially harmful to health, and its toxicity is well-documented [85]. Although most cases of camphor poisoning were due to oral ingestion, a few reports indicate that toxic doses of camphor can be absorbed through inhalation and skin contact too. It has been estimated that severe toxicity, which can cause convulsions, may occur in adults at a dose of around 34 milligrams per kilogram of body weight [86]. The typical signs of camphor poisoning when consumed by humans include headaches, nausea, vomiting, dizziness, muscle stimulation causing tremors and twitching, seizures, and delirium. The severity of these symptoms varies depending on the amount of camphor ingested [85]. Therefore, the United States Food and Drug Administration in 1983 set a limit of 11% camphor in consumer products, while Ph. Eur. allows a maximum dosage of Camphora racemica or D-Camphora of 10% when administered topically [85,87]. It is possible that one of the reasons, although nowhere explicitly stated in the literature, why LI and LL are less popular in traditional medicine than true lavender is the toxicity of camphor. On the other hand, camphor is appreciated in medicine and commonly used in topical drug formulations. Additionally, lavandin poisoning is uncommon. To the best of our knowledge, there are no described cases of lavandin poisoning in the literature, with the exception of one case involving an 18-month-old infant who ingested a homemade lavandin extract [88]. Unfortunately, the formulation and method of preparation of this extract were not described. However, the authors detected linalyl esters and acetone in both the extract and the patient's blood, suggesting that it may have been an acetone extract. Secondary and minor components can influence the overall biological action of linalool. They may be indifferent or interact with each other, leading to synergistic or antagonistic effects. Therefore, the activity of an essential oil can sometimes be stronger than that of its main components. It is generally believed that minor chemicals play a critical role in synergistic activity. The compositional complexity and natural variability of the plant material make this kind of research challenging. The Use of Lavandin in Industry and Everyday Life Despite the numerous biological effects of lavandin, documented and described in the previous chapter, the majority of lavandin usage, both in industry and everyday practice, is due simply to its aroma. 
Albeit the lavandin is also cultivated for its decorative values and for obtaining herb/dried plant material, it is widely cultivated mainly for its essential oil, and this raw material prevails in use. Indeed, lavandin oil found its use in the perfume industry, the soap industry, cosmetics, and aromatherapy [54,[89][90][91][92][93][94][95][96][97]. It is also used as a fragrance in a variety of household products (detergents, room sprays, and industrial perfumes) and food beverages because of its fresh and herbaceous odor and availability [89,90,96,[98][99][100]. Fragrance-Related Usages As mentioned in the first part, there are many cultivars of lavandin in cultivation for its essential oil, among others 'Abrial', 'Grosso', 'Super', 'Hidcote Giant', 'Dutch', 'Grappenhall', 'Provence', 'Seal', 'Sumian', and 'Budrovka' [90,94,101,102]. The 'Abrial' is highly valued for its fragrance-similar to that of true lavender, while 'Grosso' is currently the most cultivated for economic reasons. Nevertheless, the 'Grosso' scent may not be at first as pleasant as that of the 'Abrial', but the heaviness and truculence of the 'Grosso' may be an asset-depending on the purpose of its use [93]. The lavandin oil is a pale yellow to almost colorless liquid and has a strong herbaceous odor with a fresh camphene-cineolelike top note. Interestingly, in the perfume or fragrance industry, under the heading of "lavender notes", the whole group is hidden: lavender oil itself, spike lavender oil, and lavandin oil. A small percentage of "lavender" supports the complex of bergamot and oak moss within a great number of Chypre-type perfume compositions (e.g., 5% of lavandin oil). Higher percentages are found in Fougére-type perfumes ( Table 4) which often belong to one of the most successful market segments [100]. In general, lavandin oil is used in large quantities for a fresh note in perfumes. It blends well with natural and synthetic perfumery products such as citronella, cypress, decyl alcohol, geranium oil, pine needle oil, oregano oil, patchouli, and thyme oil) [96]. In toilet soap fragrances, for instance, lavandin oil is used as the only fragrance [100]. It also goes well with detergent products [96]. Another important use of lavandin is the production of absolute. Absolutes are indispensable ingredients in perfumery. The lavandin absolute is a dark green viscous liquid of herbaceous odor (sweeter than the essential oil), reminding the flowering lavandin. It has been used for herbaceous, Fougére, new-mown-hay types, floral fragrances, forest notes, and refreshing colognes. It blends well with clove oil, bergamot, lime, and patchouli and softens rough ionones [96]. According to the International Nomenclature of Cosmetic Ingredients (INCI), lavandin essential oil should be indicated as Lavandula Hybrida Oil or Lavandula Intermedia Oil in the composition of cosmetics. However, in the official documents of the European Commission (cosmetic ingredients database: CosIng) [103], lavandin oil appears under several names: two already mentioned (Lavandula Hybrida Oil and Lavandula Intermedia Oil), which refer to the general name of the species, in addition to other names which refer to lavandin varieties: Lavandula Hybrida Abrial Herb Oil, Lavandula Hybrida Barreme Herb Oil, Lavandula Hybrida Grosso Herb Oil (Table 5). Table 5 was prepared based on the CosIng ingredient index. It contains a list of lavandin cosmetic raw materials with a short description. 
INCI names containing "Lavandula Hybrida" refer to oils, extracts, and hydrosols obtained from lavandin flowers. In turn, those with "Lavandula Intermedia" in the name refer to products obtained from the whole herb (leaves, flowers, stems). It is worth noting that the flowers themselves are also used as a cosmetic ingredient [103]. In the USA, cosmetic ingredients are laid down in the Title 21 of the Code of Federal Regulations (CFR), reserved for rules of the Food and Drug Administration. There, it is stated that essential oils, oleoresins (solvent-free), and natural extractives (including distillates) are generally recognized as safe, for their intended use, within the meaning of section 409 of the Act [105]. Is LI oil a popular cosmetic raw material in general, and is it often used in cosmetic formulations? It is difficult to estimate as there are no available registers that contain reliable data of such kind. We performed a rough evaluation based on Internet resources, mainly the Environmental Working Group (EWG) and INCI Decoder (science-based ingredient verifying tool). EWG (Environmental Working Group) is an independent nonprofit organization operating in the US that issues various product safety warnings and customer guides. It collected information about 504 different cosmetics containing lavandin oil on the US market, and the summary based on its register is presented in [107]. When analyzing this data, it can be noticed that cleansing products predominate. In second place are diverse types of skincare products. Scented candles containing this oil are also available for sale [108,109]. Naturally, perfumes and Eau de Toilettes containing this ingredient should also be mentioned [110][111][112][113]. Medicinal/Therapeutic Use As extensively discussed in Chapter 2, lavandin and lavandin oil have a number of scientifically proven biological properties. Although not listed in the European Pharmacopoeia as a traditional medicinal plant, it is recognized by WHO [87,114]. Currently, lavandin oil is widely used in aromatherapy. As an alternative to synthetic formulations, essential oil blends are used for natural disinfection and room freshening. Lavandin-based products with claims of antidepressant, anxiolytic, analgesic, neuro-, and gastroprotective effects, as well as improved memory and sleep, can be purchased. These aromatherapeutic claims can be found on many cosmetics labels, such as bath and massage oils. They are not entirely unfounded (as shown in the previous chapter). However, they should be treated cautiously, as lavandin oils have not been subjected to extensive clinical trials. Medical applications of essential oils, e.g., treatment of carcinoma cells, are generally limited by the lack of their solubility in water. According to recent studies, this problem could be solved by introducing EOs as nano-formulations [115]. Lavandin EO, specially formulated as nanoemulsions, exhibits pronounced cytotoxic effects on human neuroblastoma cells, human lymphoblastic leukemia cells, and human colorectal adenocarcinoma cells [66]. It is worth emphasizing that despite the widespread use of lavandin, there is still not enough research to extend our knowledge about its medical properties, contrary to true lavender. Other Uses It is worth emphasizing that lavandin is not only a raw material to produce cosmetic ingredients. Lavandin oil is used as a natural flavorant in baked goods, frozen dairy, gelatin, soft candy, pudding, alcoholic beverages, and other food products [116]. L. 
x intermedia flowers are also a source of nectar from which bees produce honey. As might be expected, honeys derived from different lavender species show specific volatile compounds profile. Lavandin honey can be easily distinguished from the honey of other lavenders due to its high content of phenylacetaldehyde [117]. As a curiosity, lavandin is sold as a "medicinal plant for bees" or other pollinators such as bumblebees or butterflies and moths-it is considered a forage plant for pollinators [118][119][120]. Lavandin is also utilized in landscaping and agrotourism. Entrepreneurs started offering tourists opportunities such as visiting a lavender farm, where people can enjoy its beauty, learn about lavender cultivation, pick up bouquets, taste lavandin honey and ride horses in the area of lavender fields. Some farms also possess small-scale distillation units and present an opportunity to participate in the oil distillation process. Lavandin is even preferred here over true lavender due to its larger, decorative flowers that are very attractive for creating dry bouquets and are beautiful scenery for photographing and photo sessions [121,122]. L. x intermedia was also introduced in olive groves in Spain as a complementary crop that can help fight erosion, support biodiversity, and foster sustainable development. It was a result of the European Commission's Horizon 2020 project, Diverfarming, led by scientists from the University of Córdoba. The project addressed food security, sustainable agriculture silviculture, bioeconomy, and water management [123]. Scientists are trying to find other uses for LI. Essential oils are highly soluble in supercritical carbon dioxide, and therefore the supercritical fluid can be an effective fluid medium for impregnating these compounds into a polymer. Varona et al. created a supercritical impregnated n-octenyl succinate-modified starch with lavandin oil. The product obtained in this way may be used as a substitute for synthetic drugs in livestock [124]. This solution is considered for the production of drugs with controlled release of active compounds. In a review, Lesage-Meessen et al. indicated that distilled straws of lavandin (the byproduct of oil extraction) may be up-cycled as valuable raw material in biotechnological processes to obtain products with antimicrobial, antioxidant activity sought by the pharmaceutical and cosmetic industry [125]. Products based on lavandin are very common in our lives. It is possible that when we wash our hands with "lavender" soap or use "lavender" body lotion, we are using, in fact, products based on lavandin material. It is worthwhile to think about the importance of lavandin and give more credit to the "bastard", as it is unfavorably called. Conclusions In this review article, we comprehensively present and discuss the biological activities of essential oil and other active ingredients from Lavandula x intermedia (LI), as well as its current typical uses in industries and everyday life. Lavandin oil is primarily used in cosmetics. For therapeutic purposes, it is used in aromatherapy sessions, although, except for WHO, LI oil is not recognized officially as any medical agent. Thus, Lavandula angustifolia (LA) essential oil, a raw material acknowledged by Ph. Eur., receives the most attention from the scientific community and has been studied more extensively for its biological effect and possible use in therapy. 
LI and LA essential oils have a similar chemical composition, with some differences, mainly in camphor, borneol, and 1,8-cineole content. This can potentially increase the risk of lavandin toxicity due to its higher camphor content and lead to some differences in the biological activities of the oils, ranging from the scent to the biocidal effects (similar or slightly stronger for lavandin). This paper summarizes all reported biological activities of LI, including antioxidant, biocidal, anxiolytic, neuroprotective, antithrombotic, immunomodulatory, and analgesic effects. In studies where L. x intermedia and L. angustifolia, or even Silexan, were compared, the effects were usually similar. This implies that L. x intermedia possesses therapeutic potential similar to that of its parent species. However, this cannot be definitively confirmed, as the research is scarce; there is currently not enough evidence, and more studies are needed in this area. Accordingly, the question stated in the introduction, "Is lavandin oil less valuable than true lavender oil in other non-perfumery applications?", cannot be answered with 100% certainty at this time. L. x intermedia is as good as true lavender in its biocidal action, similar in terms of antioxidant properties, and may well hold the same additional therapeutic value as L. angustifolia. What is certain is that lavandin is superior to L. angustifolia in terms of essential oil yield. This practical aspect has made lavandin the dominant lavender crop cultivated all over the world. Acknowledgments: The authors would like to express their sincere gratitude to Aneta Kucik for kindly reviewing the text and checking its linguistic accuracy. Conflicts of Interest: The authors declare no conflict of interest.
2023-03-30T15:21:53.925Z
2023-03-27T00:00:00.000
{ "year": 2023, "sha1": "efe779727be9958b4bf515b1cc01d3c24e436c5c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/28/7/2986/pdf?version=1679976673", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7303658748aa50a83c586639b72f9f7d3ac5e32b", "s2fieldsofstudy": [ "Biology", "Chemistry", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247222784
pes2o/s2orc
v3-fos-license
An Adaptive Human Driver Model for Realistic Race Car Simulations Engineering a high-performance race car requires a direct consideration of the human driver using real-world tests or Human-Driver-in-the-Loop simulations. Apart from that, offline simulations with human-like race driver models could make this vehicle development process more effective and efficient but are hard to obtain due to various challenges. With this work, we intend to provide a better understanding of race driver behavior and introduce an adaptive human race driver model based on imitation learning. Using existing findings and an interview with a professional race engineer, we identify fundamental adaptation mechanisms and how drivers learn to optimize lap time on a new track. Subsequently, we use these insights to develop generalization and adaptation techniques for a recently presented probabilistic driver modeling approach and evaluate it using data from professional race drivers and a state-of-the-art race car simulator. We show that our framework can create realistic driving line distributions on unseen race tracks with almost human-like performance. Moreover, our driver model optimizes its driving lap by lap, correcting driving errors from previous laps while achieving faster lap times. This work contributes to a better understanding and modeling of the human driver, aiming to expedite simulation methods in the modern vehicle development process and potentially supporting automated driving and racing technologies. I. INTRODUCTION Throughout motorsports' more than 125-year-long history, the fundamental goal of all participants has not changed: reaching the best racing performance among competitors, which ultimately requires engineering a race car that fits its driver well. In fact, Milliken & Milliken already stated in 1995 that "It is the dynamic behavior of the combination of high-tech machines and infinitely complex human beings that makes the sport so intriguing for participants and spectators alike" [1]. Hence, for modern vehicle development in professional motorsports, a good understanding and modeling of the human driver are crucial to further improve the performance of the driver-vehicle system. At the same time, the human decision-making process during racing is extremely complex and thus difficult to model, as: 1) many influencing factors exist, 2) vehicle dynamics are highly nonlinear and race cars are usually driven at the handling limits, and thus difficult to control, 3) each driver exhibits an individual driving style, 4) the generalization and adaptation mechanisms are complex and difficult to incorporate in a driver model. While challenges 1-3 have been successfully addressed in recent research [2], [3], the problem of understanding and imitating the human adaptation process remains unsolved. With our work, we intend to identify and understand the most important adaptation and learning techniques mastered by professional race drivers, contribute to the modeling of driver behavior by developing two methods to mimic this behavior, and evaluate the proposed methodology within a realistic Human-Driver-in-the-Loop (HDiL) simulator as shown in Figure 1. (Figure 1: The adaptive human driver model is developed using demonstration data from professional race drivers which was generated in this simulator. Subsequently, the driver model is evaluated using the same simulation environment, intending to support the future vehicle development process.) *This work was supported by Dr
A well-fitting driver model is not only useful for advancing autonomous driving and racing technology but also to support the modern vehicle development process. Such a model could considerably extend and improve full vehicle simulations, ultimately enhance the resulting vehicle performance and development efficiency, while being much less expensive compared to HDiL simulations. A. Problem Statement and Notation In order to model human race driver behavior we aim to learn a human-like control policy π M which maps the current overall state x, including vehicle state and situation on track, to the vehicle control inputs a = [δ g b] composed of steering wheel angle δ, throttle pedal position g and brake pedal actuation b. This policy should be able to robustly maneuver a race car at the handling limits while being similar to the unknown internal driving policy π E of human experts. At the same time, this expert policy is non-deterministic due to natural human imprecision and intentional adaptation, and able to generalize to new situations as, for example, new race tracks. In this work, we aim to approach the problem of modeling this behavior by: (a) identifying and understanding the most important adaptation and learning mechanisms through related work and an expert interview with a professional race engineer, (b) using these findings to considerably extend a data-based driver modeling approach, and (c) evaluating the developed methods using data from professional race drivers and a state-of-the-art motorsport simulation environment. Consequently, the resulting driver-specific control policy π M should be able to generalize to unseen tracks and exhibit certain adaptation characteristics as the human driver. B. Related Work As vehicle dynamics are well understood nowadays, different approaches with varying complexities are available to model the physics of a car in different driving situations [1], [5], [6], [7], [8]. Such vehicle models can be used to predict the driving behavior in standard maneuvers or to estimate the vehicle performance on a particular race track using lap time simulation approaches [7], [9], [10], [11]. However, individual human driver behavior, being an important component of the vehicle-driver-entity, is often not sufficiently considered by these methods. This fact encourages motorsport teams to utilize HDiL simulation approaches, where the real driver operates the vehicle within a realistic simulator environment, facilitating faster prototyping and more realistic predictions of the true vehicle performance [12]. A variety of related work describes car racing from the driver's perspective, analyzes racing techniques, driving lines, and the complex decision processes in greater detail, and contributes to a better understanding of the human driver in general [13], [14], [15], [16]. Nevertheless, the task of modeling this behavior remains highly challenging. A number of approaches for building a virtual driver for different use cases mainly rely on conventional control architectures with a limited number of parameters [17], on model predictive control [18], and on exact linearization [19]. Some recently developed methods utilize machine learning techniques to imitate human drivers based on demonstration data: using supervised learning, random forests were trained to predict car control inputs from basic vehicle states [20] and it was shown that a feedforward neural network is able to track a driving line generated by a human [21]. 
Furthermore, methods based on reinforcement learning like the Generative Adversarial Imitation Learning (GAIL) framework [22] were utilized to mimic drivers in a highway driving scenario [23], and were extended to imitate human behavior in a short-term race driving setting based on visual features [24]. Besides that, research on personalized driver modeling is available that targets specific human individuals [25], [26], [27]. However, these models do not consider human adaptability and are partially limited to only model steering as a car control input. The Probabilistic Modeling of Driver Behavior (ProMoD) framework was demonstrated to be capable of completing full laps with a competitive performance by mimicking professional race drivers in a realistic HDiL simulation environment [2], [3]. The data-based and modular approach learns distributions of driving lines represented by Probabilistic Movement Primitives (ProMPs) [28], [29] and trains a recurrent neural network on human race driver data in a supervised fashion. Related to this, there seems to be a shift from linear and timeinvariant models of human manual control to nonlinear and time-varying approaches that are apparent in current research trends [30]. In particular, adaptation over time is identified as a key aspect of human behavior that should and can be modeled by moving towards time-varying models. While the ProMoD framework is shown to work well in many situations, it is still lacking the functionality of a time-varying model, i.e., the ability to learn driving on unknown tracks and to adapt and learn from gathered experience from driven laps. As such learning and adaptation aspects play fundamental roles in competitive motorsports, any robust and accurate driver modeling approach should be able to reflect them. Human adaptation behavior w.r.t. adaption times for changing road types in a driving simulator is analyzed, yet not modeled in the work of [31]. Past research on modeling driver adaptation to sudden changes of the vehicle dynamics takes into account limb impedance modulation and updating of the driver internal representation of the vehicle dynamics [32]. However, this related work focuses exclusively on the lateral dynamics with a first-principles approach without a superordinate objective such as lap time. With our work, we considerably modify and extend ProMoD to model human driving adaptation -to the best of our knowledge, for the first time in the racing context. Due to the modular architecture, the driving policy is adapted in a transparent manner. We contribute to a better understanding of human race driver behavior, considerably enhance the quality of a modern driver modeling approach, and aim to pave the way for more accurate vehicle simulations and, potentially, future autonomous racing. II. METHODOLOGY As a proper understanding of the human race driver is fundamental for modeling its learning techniques, we ground our methodology on key insights from literature, supplemented by findings of an expert interview with a professional race engineer 1 for LMP1 2 race cars. The adaptation principles identified in Section II-A are followed by a short summary of the recently presented ProMoD driver modeling framework in Section II-B. In Section II-C we present a novel way to generalize the driver model to new tracks. Finally, Section II-D introduces a new method to optimize driving similar to a real race driver based on experience from previous laps. A. 
Adaptation Principles Race drivers constantly pursue better racing performance in the presence of new tracks and modified vehicle setups. In this section, we aim to understand the most important principles for the adaptation behavior of race drivers. We gather the following key insights from literature, extended with an expert interview 3 of a professional race engineer in Appendix V: Objective: (delta) lap time: Drivers aim to drive as fast as possible and minimize the lap time in order to win races [33]. Hence, drivers tend to pay attention to their delta lap time, i.e. relating the current lap to the previous or the best lap. 3 Any modifications to the vehicle setup or environmental influences are handled as disturbances by adapting the control policy. 3 Risk awareness: Race drivers are particularly risk-aware and constantly test for the vehicle limits [16]. Furthermore, they aim to optimize performance by starting from a safe region and improving their driving incrementally. 3 Hierarchy: Brake points and speed profile are related hierarchically in a sense that, starting from anchoring and shifting the brake points, the corner-entry speed follows as a consequence and influences the performances of the entire corner [14], [15]. Finally, the driving line follows from the speed profile and the driver tries to control these three aspects in the same hierarchical order. 3 Initialization -Driving on new tracks: When starting on a new track, drivers tend to compare all new situations and corners to their experience from other tracks [14], [15]. This information is used to get an initial guess of reasonable brake points and driving lines, which is subsequently refined. 3 The initialization of brake points begins already before starting to drive, while the speed profile and driving line is initialized during the first few laps. 3 After initialization, drivers are able to complete the lap with a close to competitive lap time. 3 Iteration -Adaptation rules and quantities: The general adaptation strategy seems to be similar for all drivers, where adaptation of the braking, i.e. brake points and peak brake pressure is particularly important. 3 By fine-tuning them, drivers manage to achieve better performances. 3 To summarize and simplify the problem, we set up the following qualitative model: Race drivers optimize delta laptime as a function of brake points, peak brake pressure, and other variables as visualized in Figure 2. This function is parameterized through the vehicle setup. To solve this problem, the brake points variable is initialized in the "Preparation" phase in a safe region, i.e., such that the lap can be completed. Speed and driving line are initialized in hierarchical order during the "Warm-Up" phase. Afterward, drivers iteratively adapt and try out changes on all three hierarchical levels during "Fine-Tuning". Eventually, they arrive close to the optimizer shown as a star on the top of Figure 2. This point usually lies close to the boundary of the safe set, as the driver will be operating the vehicle at the limits of handling. B. ProMoD The recently presented ProMoD framework combines knowledge and ideas from both, race driver behavior and autonomous driving architecture. It consists of multiple modules as visualized in Figure 3, where each of these modules represents fundamental steps in the decision-making process of a human race driver. 
[2], [3] In the following, the existing architecture is shortly summarized in order to provide a proper foundation for the subsequent development of the novel generalization and adaptation methods: Global Target Trajectory: Every driver keeps a mental image of the whole race track in his head, knowing approximately where to brake, to turn in, and to accelerate again in each corner. However, this imagined driving corridor is not precise, i.e. it incorporates some variance, and additionally changes over time with gathered experience. Hence, we model the global target trajectory with a distribution over potential driving lines, which could be interpreted as a driving corridor, using ProMPs [28], [29]. For this purpose, both the spatial and the temporal information of every demonstrated driving line on a particular track should be projected to a lower-dimensional weight space. We define a series of equally distributed Radial Basis Functions (RBFs) with function index j ∈ {1, 2, . . . , N BF }, track distance s, constant width h, and c j being the equally distributed centers for a total of n variables that the trajectory consists of, with Φ s = Φ s,v1 = · · · = Φ s,vn . The weight vector is derived using ridge regression for each demonstration trajectory τ s,i ∈ R nNs×1 and regularization factor . By fitting a Gaussian distribution N (µ w , Σ w ) over the N demonstration weights with mean µ w and variance Σ w we are able to describe the distribution of driving lines for a driver on a particular track efficiently. Subsequently, an arbitrary number of new driving lines which are similar to all demonstrations can be generated by sampling a weight vector from this distribution w * ∼ N (µ w , Σ w ) and using to retrieve a new driving line in the original formulation which could be subsequently used for simulation. Local Path Generation: For any situation on track, a human driver continuously plans the upcoming path a few seconds ahead. We use this module to mimic the path planning by calculating constrained polynomials and multiple preview features 4 based on the current vehicle state and a sample from the previously constructed driving line distribution. These local path features are denoted as x LP . Perception: In addition to the path planning features, each driver relies on additional information about their surroundings, such as visual information or experienced accelerations. These perception features, which mostly relate to basic vehicle states, are gathered inside this module and are denoted as x P . Action Selection: The action selection process, i.e. the mapping from the current situation on track (as described by the feature set x = [x LP x P ]) to human-like control actions a, is learned using a recurrent neural network. This neural network is trained on all available demonstration data for a particular driver, aiming to imitate its individual driving style and incorporating the dynamics of the action selection process. This architecture now allows to directly adapt the driver model on different levels in a transparent manner. In contrast to end-to-end learning, a black-box behavior is avoided in order to increase interpretability. In the following, we present methods to generalize and adapt this driver model in two different phases. First, the 'Track Generalization" is introduced in Section II-C to address the "Preparation" and "Warm-Up" steps identified from the interview (see Figure 2). Afterward, the iterative "Fine-Tuning" is modeled by "Feature Adaptation" in Section II-D. 
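To make the Global Target Trajectory module more tangible, a minimal sketch of the ProMP fit and sampling is given below (Python; all names, basis sizes, and regularization values are illustrative assumptions, and only a single trajectory variable is treated, whereas ProMoD encodes several spatial and temporal variables jointly).

import numpy as np

def rbf_features(s, n_bf, h):
    # Equally spaced radial basis functions over the normalized track distance s in [0, 1]
    centers = np.linspace(0.0, 1.0, n_bf)
    phi = np.exp(-(s[:, None] - centers[None, :]) ** 2 / (2.0 * h ** 2))
    return phi / phi.sum(axis=1, keepdims=True)

def fit_promp(demos, n_bf=50, h=0.02, reg=1e-6):
    # Project each demonstrated trajectory onto RBF weights by ridge regression,
    # then fit a Gaussian N(mu_w, Sigma_w) over the weight vectors (needs >= 2 demos)
    weights = []
    for tau in demos:
        s = np.linspace(0.0, 1.0, len(tau))
        phi = rbf_features(s, n_bf, h)
        weights.append(np.linalg.solve(phi.T @ phi + reg * np.eye(n_bf), phi.T @ tau))
    W = np.stack(weights)
    return W.mean(axis=0), np.cov(W, rowvar=False) + 1e-9 * np.eye(n_bf)

def sample_line(mu_w, sigma_w, n_s=2000, h=0.02, rng=None):
    # Draw w* ~ N(mu_w, Sigma_w) and reconstruct a new driving line on a dense grid
    rng = rng or np.random.default_rng()
    w_star = rng.multivariate_normal(mu_w, sigma_w)
    s = np.linspace(0.0, 1.0, n_s)
    return rbf_features(s, len(mu_w), h) @ w_star

Fitting, for instance, the lateral position and the elapsed time of every demonstrated lap in this way and sampling a new weight vector yields a driving line that stays within the driver's demonstrated corridor.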
The overall adaptation process is visualized in Figure 2. C. Track Generalization: Generate Driving Line Distributions In order to generate first laps on a new, yet unknown track, it is required to learn a reasonable driving line distribution for the Global Target Trajectory module. All other modules of ProMoD are track-independent by definition and remain unmodified. Hence, we aim to estimate a reasonable driving line distribution for this new track only based on the known track borders and on available experience from other tracks. Inspired by the results from Section II-A, we propose the methodology described in Algorithm 1. We utilize a novel ProMP description, conventional methods to fit driving lines based on geometric boundaries, and a method to estimate the variance of the driving line around the track based on experience from other tracks. ProMPs on Demonstration Data: As drivers utilize their existing experience during familiarization with a new track, we are required to encode this knowledge in a reasonable way. For this purpose, we use all available driving line data from all known tracks D and calculate ProMPs with a modified representation as driving line distributions for each track separately. Hence, we take the time-based vehicle positions Algorithm 1 Estimating a driving line distribution + sampling in the inertial reference frame for all laps on this track and map them to a curvilinear description for each track. Thereby, dy represents the lateral deviation from a reference line and κ the line curvature, both based on the reference line distance s. While the information in dy and κ is partially redundant, both formulations are required for subsequent calculations. By using equidistant samples from dy and κ and equidistantly spaced RBFs, it is now possible to project the driving line from each lap to a lower-dimensional weight space using ridge regression, resulting in weight vectors w dy and w κ . Assuming a Gaussian distribution, we retrieve mean weight vectors µ κ w , µ dy w and variances Σ κ w , Σ dy w to describe the distribution of all available driving lines on a particular track. By iterating this process for all available tracks, we can aggregate all driving line information into µ κ,dy w , Σ κ,dy w . In the following, we estimate a driving line distribution for a unknown track by combining this variance information with a conventional path planning method. Generate Mean Driving Trajectory: We start by estimating a mean driving trajectory which is only based on the known track boundaries B left and B right . As the generation of a reasonable and collision-free path around the track is required, we decide to use Elastic Bands [34], [35]. While being computationally efficient and easy to interpret, this method showed to produce reasonable driving line estimates with sufficient accuracy. The resulting trajectory is now assumed to be the reference and the mean driving line for the new track. As it is initially represented in the inertial space x (s), y (s) we subsequently project it to the curviliniar space κ (s). Similarly to the ProMP calculation on the available demonstration data, the curvature κ (s) is finally projected to the lower dimensional weight space and assumed to be the mean curvature µ κ w with µ dy w = 0 by definition. Variance Estimation: Using this mean trajectory and the existing corner information from other tracks, we estimate the variance with a sliding window approach. 
For this purpose, we are moving along the estimated mean driving line's curvature κ (s) and compare the current situation, described by a sequence of curvatures, to all situations on all known tracks as encoded in µ κ,dy w , Σ κ,dy w . By finding the most similar corner measured by the absolute difference between curvatures, we are now able to iteratively build Σ dy w , which describes the variance of driving lines on the new track. 5 Sampling and Reconstruction: Using the initially estimated mean driving line described by x (s), y (s), and the modified ProMP defined by mean µ dy w = 0 and covariance Σ dy w which describes the distribution of lateral deviations from this mean line, we are now able to sample an arbitrary number of new driving lines for the new track. For this purpose we draw a sample weight w * dy i ∼ N µ dy w , Σ dy w and retrieve the lateral deviation dy * i (s) through reconstruction with Φ s,dy w * dy i . Now, it is possible to construct the sampled driving trajectory in the Cartesian space using where φ * i corresponds to the heading angle of the vehicle, with φ * i = 0 when the vehicle drives purely into x-direction. Speed Profile: In addition to the trajectory of the vehicle, ProMoD requires a speed profile for the Local Path Generation module. Since this velocity profile is dependent on the vehicle and its setup, and hard to estimate using the available demonstration data, we follow a more robust approach based on vehicle dynamics. For each sampled vehicle trajectory x * i (s) , y * i (s), we utilize a conventional lap time estimation approach based on the vehicle performance envelope P to retrieve an approximate speed profile [7], [9]. Simulation: The sampled driving lines with corresponding speed profiles can now be used to reconstruct the original ProMP formulation within the previously presented ProMoD framework. An initialization with a reduced performance envelope P represents the "Preparation" phase on a new track and allows to safely simulate first laps. By iteratively expanding P and simulating the resulting driving lines and speed profiles, ProMoD is able to cautiously approach the vehicle limitations, aiming to mimic the "Warm-Up" phase. The complete process facilitates simulations on a new track where no demonstration data exists, enhancing our driver modeling framework with track familiarization abilities to generate first fast laps. However, as a human driver continuously optimizes its performance when being familiar with a track, ProMoD should also be adaptable and learn from experience, as shown in Section II-A. D. Feature Adaptation Professional race drivers master the skill of continuously optimizing their performance by analyzing past laps and adapting their driving behavior accordingly. With an additional feedback loop shown in Figure 4, ProMoD is enabled to mimic this learning process to a certain extent and achieve self-adapting behavior. By only adapting the global target trajectory, which is used to compute local path planning features x LP , the behavior of ProMoD can be influenced. At the same time, ProMoD maintains its ability to imitate human drivers without yielding super-human performance as the action selection module remains unchanged. 
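Before turning to the Conditioning and Scaling operations, the sampling and reconstruction step of the track generalization described in Section II-C can be illustrated with a short sketch. It operates directly on curvature samples rather than in the ProMP weight space used above, and the window size, the fallback value, and the geometric offset convention are assumptions made for illustration.

import numpy as np

def matched_variance(kappa_new, kappa_known, var_known, w=40):
    # Sliding-window matching: for each location on the new track, find the most similar
    # curvature window on the known tracks and reuse its lateral-deviation variance
    n, m = len(kappa_new), len(kappa_known)
    var_new = np.full(n, float(np.mean(var_known)))   # fallback near the track ends
    for i in range(w, n - w):
        seg = kappa_new[i - w:i + w]
        errs = [np.abs(seg - kappa_known[j - w:j + w]).sum() for j in range(w, m - w)]
        var_new[i] = var_known[w + int(np.argmin(errs))]
    return var_new

def reconstruct_line(x_ref, y_ref, heading, dy):
    # Offset the estimated mean line laterally by a sampled deviation dy(s);
    # with heading = 0 pointing along +x, the left normal direction is (-sin, cos)
    return x_ref - dy * np.sin(heading), y_ref + dy * np.cos(heading)

A sampled lateral deviation dy*(s), obtained by drawing a weight vector from N(0, Σ_dy) and multiplying with the basis functions, is then mapped back to Cartesian coordinates with reconstruct_line, and a conventional lap time estimation supplies the matching speed profile.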
In the following, we will use Conditioning and Scaling to effectively vary the trajectory while keeping it human-like: Conditioning: Recall that the ProMPs for the global target trajectory are represented by a Gaussian weight distribution p(w) ∼ N (w | µ w , Σ w ) with mean weight vector µ w and covariance matrix Σ w . We are now able to alter this distribution by conditioning the prior distribution to a new observation x * s = {y * s , Σ * y } at a specific location s = s , as presented in [29]. Here, y * s ∈ R n×1 is an algorithmically chosen target state of the vehicle position and velocity to be reached at distance s , and variance Σ * y ∈ R n×n is the confidence of this new observation. The conditional distribution p (w | x * s ) remains Gaussian with updated parameters where relates the variances of the prior distribution and the new observation with Ψ s ∈ R nNBF×n representing the value of all basis functions at s = s . [29] This procedure allows to move brake points or to shift apexes 6 by conditioning the prior distribution utilizing a set of rules derived from Section II-A. However, when using the prior variance without further consideration, conditioning at a specific turn potentially affects distant turns due to nonzero covariances in the data, as shown for Σ ∆t∆t in Figure 5 (a). As such a large effect across multiple turns would not be considered human-like, we aim to reduce it by masking the original matrix using a factor matrix F k ∈ R N BF ×N BF shown in Figure 5 (b). By multiplying F k element-wise with Figure 5: Masking the covariance matrix: (a) Part of the covariance matrix for a single variable (Σ ∆t∆t ∈ R N BF ×N BF ), where brighter colors indicate higher covariances. Far-off-diagonal correlations in the data potentially result from different vehicle setups in the demonstration data but are difficult to consider during conditioning. (b) Factor matrix for a single variable, where the elements on the diagonal are one, and off-diagonal entries are fading out to zero using bandwidth k. Here, k is selected such that distant and non-consecutive turns can not mutually influence each other. (c) Resulting matrix Σ mask w ∈ R nN BF ×nN BF for three variables after masking, filtering out correlations over larger distances. each submatrix of Σ w , we retrieve a masked matrix for conditioning which effectively lowers the influence on distant regions as shown in Figure 5 (c). This matrix could now be used for effective local Conditioning. Scaling: In order to fully utilize the vehicle's potential on straights the speed profile can be adapted to influence the throttle actuation and braking behavior of ProMoD. Since the neural network works to some extent like a trajectory-tracking controller, whose goal is to keep the control error between the reference speed and the actual speed as small as possible, its output signals tend to fluctuate during intervals of full throttle. Therefore, if the actual velocity is larger than the reference velocity, ProMoD tends to accelerate less, even if the virtual driver is on a straight line and is expected to drive as fast as possible. This problem can be effectively solved by smoothly scaling the reference speed on long straights. Adaptation Process: The complete adaptation process, shown in Algorithm 2, is inspired by the insights from Section II-A and uses both introduced methods, Conditioning and Scaling, to continuously adapt ProMoD based on gathered experience. 
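A minimal sketch of the Conditioning step with the masked covariance is given below; the update equations are the standard Gaussian ProMP conditioning of [29], while the linear fade-out of the factor matrix and the single-variable treatment are simplifying assumptions.

import numpy as np

def band_mask(n_bf, k):
    # Factor matrix F_k: ones on the diagonal, fading linearly to zero with bandwidth k,
    # so that conditioning in one turn cannot pull on distant, non-consecutive turns
    d = np.abs(np.arange(n_bf)[:, None] - np.arange(n_bf)[None, :])
    return np.clip(1.0 - d / float(k), 0.0, 1.0)

def condition_promp(mu_w, sigma_w, psi, y_obs, sigma_obs, k=8):
    # Condition the weight distribution N(mu_w, Sigma_w) on a new observation y_obs
    # at one track location; psi holds the basis-function values at that location
    sigma_m = sigma_w * band_mask(len(mu_w), k)                 # element-wise masking
    gain = sigma_m @ psi @ np.linalg.inv(sigma_obs + psi.T @ sigma_m @ psi)
    mu_new = mu_w + gain @ (y_obs - psi.T @ mu_w)
    sigma_new = sigma_m - gain @ psi.T @ sigma_m
    return mu_new, sigma_new

Shifting a brake point or pulling the driving line outwards before an apex then amounts to choosing y_obs at the corresponding track distance together with a small observation variance sigma_obs.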
While the original version of ProMoD simply samples and simulates multiple driving lines from a single distribution, we will now use the variance information for a Algorithm 2 Adaptation Process end if end for continuous and targeted adaptation. After simulating a lap, an initial check is done whether the lap was completed successfully. If the simulation ended before completing a full lap, the situation where the vehicle left the track is analyzed and the ProMP is conditioned using two sub-procedures: • Driving-line check and adaptation: As illustrated in Section II-A, the turn-in is the most important phase during cornering. Hence, the driving line is compared to the permissible driving corridor, represented by track borders or by the envelope of all demonstrations from the human drivers, and the largest deviation before the apex is found. Then, a new observation y * s is added for Conditioning at this position, shifting the driving line distribution towards the permissible area. • Velocity adaptation: If no valid adaptation is found or extreme tire slip occurs, the target velocity will be reduced. In practice, ProMoD can eventually complete each critical corner when the target speed is low enough. Subsequently, the completed laps can be further adapted to improve the lap time and to keep the driving line in the envelope by: • Checking and reducing smaller deviations from the permissible driving corridor: Just like in the real race, ProMoD sometimes slightly exceeds the theoretically allowed driving corridor but still manages to complete the lap. These situations are checked and additional control points are introduced for Conditioning. • Checking acceleration intervals and Scaling of the speed: As discussed before, ProMoD partially does not utilize the full vehicle potential during acceleration phases on straight lines. Hence, speed scaling is used to further increase the performance on already completed laps. By introducing this process, we are able to encourage ProMoD to learn from the experience from previous laps, to correct mistakes, and to increase performance. III. EVALUATION In this work, we use data of professional race drivers generated with the HDiL simulator shown in Figure 1 to train and evaluate our driver modeling approach. All rollouts of our driver model are simulated using the same in-house developed vehicle model of a high-performance race car, guaranteeing realistic vehicle dynamics and facilitating comparability to the human demonstrations. The task of driving the simulated race car is highly challenging as the car only uses a Traction Control as driver assistance system. In order to safeguard intellectual property, all plots in this paper are shown normalized. Furthermore, all driver names are anonymized. A. Track Generalization We evaluate the presented track generalization method of our ProMoD framework on two different race tracks, Motorland Aragón (AGN) and the Yas Marina Circuit in Abu Dhabi (ABD), and leave out the correspondingly available demonstration data in the training procedure of our driver model. We start by comparing the predicted driving line distribution to the corresponding driving lines of the human driver on AGN in Figure 6. It is visible that the generated driving line distribution, despite not being completely driverspecific and showing some small variations, approximately fits the driven paths of the real driver. 
When using these driving line distributions for simulation on unknown tracks, ProMoD is capable of completing full laps on the respective race track, as visualized in Figure 7 for ABD. We compare five laps from the human driver (dark grey) to five laps from the track generalized ProMoD framework (red) on the identical vehicle setup. (a) shows a comparison of the driver actions and the resulting speed profiles over the normalized track reference distance. Here, ProMoD is able to approximately reproduce the throttle, braking, and steering activity of the real driver considering the braking points, actuation speeds, and amplitudes. The velocity profile shows some small deviations after the first corner where ProMoD does not fully utilize the vehicle potential due to a slightly too conservative speed profile estimation in this region. (b) visualizes the resulting simulated driving lines around the track (light grey). The position of the start/finish line and the driving direction is indicated by the bright blue triangle. Here, ProMoD approximately follows the demonstrations of the human driver despite they were not used for training. Some deviations are present at particularly challenging locations (e.g. the hairpin corner on the left side), which, however, do not prevent ProMoD from finishing the lap with a reasonable performance. These deviations may be reduced using adaptation methods to learn from the gathered experience on the track. Here, ProMoD is able to finish complete laps on the unknown race tracks being less than 0.5% slower than the human driver in the median, but partially already achieving competitive times on particularly fast laps. The slightly slower median lap time might be a result of a yet non-optimal speed profile or driving line distribution. For AGN, the track generalization method achieves comparable results considering the similarities of the resulting driving line and driver action distributions with the human driver. Furthermore, we compare the performances of ProMoD and the real driver on both tracks with equal vehicle setups. Figure 8 visualizes the resulting lap time distributions in a box plot, respectively normalized to the median lap time of the human driver on each track. Here, ProMoD is able to achieve lap times close to the real driver, with a slightly increased median due to small deviations in the expected speed profiles. B. Feature Adaptation The feature adaptation process is tested on two different tracks, the Silverstone Circuit (SVT) and Motorland Aragón (AGN). We start with an evaluation of the local effects of Conditioning and Scaling by showing the executed adaptations, the resulting changes in the driven path, and the selected actions of the driver model. Subsequently, we are going to test the complete adaptation process on both tracks, showing that the method is able to increase the covered distance for previously unfinished laps, and to increase performance by reducing lap time without losing control. Local Effect -Adaptation: The local effects of adaptation are presented in Figure 9 and Figure 10, visualizing adaptations of the driving line and the speed profile, as well as the resulting action signals and driven paths. Here, ProMoD fails initially at Turn (T) 6/7 of the SVT racetrack due to considerably exceeding the vehicle potential as shown in Figure 9 (b). 
In order to adapt the speed profile effectively, three control points are used to set the lower peak speed value, resulting in earlier braking and consequently helping to avoid the mistake and pass the turn. At the same time, with the purpose of reducing the curvature at corner entry, the driving line is pulled outwards around fifty meters before the first apex as shown in Figure 9 (a). After two iterations of simultaneously adapting both, the speed profile and the driving line, ProMoD succeeds in passing this turn. Local Effect -Scaling: Scaling is particularly useful on straights, if ProMoD initially does not fully utilize the vehicle potential due to a modified vehicle setup and a too conservative prior target speed definition. Its effect becomes apparent when observing the accelerator actuation signal. With a higher reference speed, the model tends to utilize full throttle more often on long straight lines, as shown in Figure 11. As a consequence, the fluctuation of the throttle signal on those intervals are effectively eliminated, and the lap time is improved by about 0.2 seconds. Adaptation Process: The developed adaptation process for ProMoD has been successfully tested on SVT and AGN racetracks as visualized in Figure 12. While it only requires 4 iterations to complete SVT, ProMoD needs more iterations for AGN since it fails at more locations. On both tracks, ProMoD succeeds in completing a lap after less than 20 iterations, with at most five iterations for a problematic turn. speed Figure 10: Adaptation of the target speed profile for T6/7 on SVT: This figure visualizes the target speed, as well as resulting vehicle states and driver actions over the normalized segment distance before and after adaptation. By using three control points the method is able to adapt the target speed profile effectively while preserving its general shape. The balance plot indicates the current driving state: Positive values relate to understeering, i.e. situations where the vehicle "plows" and turns less than a neutral steering car. Contrarily, negative values indicate oversteering situations, i.e. the limits of the rear axle are exceeded and the vehicle might start to spin if the driver does not counter steer. Before adaptation and at normalized segment distance 0.25, the vehicle gets in such an oversteering situation, but ProMoD is able to counter steer and recover the vehicle at the cost of losing speed. However, at distance 0.65 ProMoD largely exceeds the vehicle potential, resulting in a slide over both axles which forces the vehicle off the track (see Figure 9 (b)). After the adaptation of the speed profile and the driving line, ProMoD is able to keep the vehicle on the track without exceeding its potential, considerably reducing critical situations. Here, ProMoD uses an increased braking force during the first turn in, later acceleration, and earlier throttle lift and braking for the following turn. Those changes contribute to the success of the adaptation and ProMoD passes the turn after two iterations. IV. CONCLUSION In this paper, we present new insights into the general adaptation behavior and the learning processes of professional race drivers and derive new methods to extend ProMoD, an advanced modeling method for driver behavior. With the purpose of understanding driver behavior in general and identifying the most important adaptation processes, this work starts with key insights from related work and an expert interview with a professional race engineer. 
Based on the hereby acquired knowledge, we develop a novel method that can estimate human-like driving line distributions for unknown tracks. These distributions could be used to simulate complete laps with almost competitive performances and human-like driver control inputs in a professional motorsport driving simulator. Subsequently, we present a feature adaptation method that allows ProMoD to learn from the gathered experience of previous laps. Using different experiments, we demonstrate the ability to continuously learn from mistakes and to improve driving performance. This work contributes to the modeling and a better understanding of driver behavior, paving the way for advanced fullvehicle simulations with consideration of the human driver and potentially future autonomous racing. Due to its modular architecture, ProMoD could be extended in various ways in future research. Besides additional methods for feature adaptation and optimization, the neural network of the action selection module could be adapted to learn from experience using reinforcement learning techniques, or real track data may be used to provide more demonstration data. Furthermore, human-like qualitative feedback, which is based on encountered problems during driving, could help to further support the vehicle development process. In addition, our driver model may be extended to a multi-agent environment with opponents on the race track, facilitating a more accurate prediction of true racing performance and potentially optimization of complete racing strategies. Finally, ProMoD might be applied to other similar use cases, with the target of modeling human behavior in dynamic environments with small stability margins. V. APPENDIX -EXPERT INTERVIEW Is there a universal adaption rule that applies to all drivers and tracks? Indeed, it turns out that adaption strategies are very similar across different drivers, tracks, and vehicles, in spite of the individual driving behavior, the various layouts of the tracks and the continuously modified vehicle setups. The driver's main goal is to 'brake as late as possible, and accelerate as early as possible'. The resulting driving line, the turn-in, and the on-throttle behavior are seen as a consequence of pursuing that goal. How do drivers drive their first laps on a new track? When faced with a new track, what a driver would do can be divided into three phases: preparation, warm-up, and the subsequent fine tuning. • Preparation. Drivers come to a new track with a memorized 'database of corner information', collected from their prior experience, simulator sessions, statistical data, etc. First, drivers characterize each new corner by comparing it with those in their memory, and assemble a first guess of the driving line. Since every corner is unique, this first guess is usually a rough approximation. At this point, it is helpful to consult other drivers to improve the initial guess. Finally, they set brake points, utilizing signs in the environment such as brake markers. Having concretized all prior information and exchanged opinions with fellow drivers of specific positions for hitting the brake pedal, the drivers start their first laps on a new track. • Warm-up. Race drivers are particularly talented in assessing risk. They usually start off with a slow and safe speed profile, which they adapt from lap to lap to higher velocities. This process can take very few iterations. 
For example, one driver managed to reach a competitive lap time on the Le Mans circuit surprisingly after only five laps. • Fine Tuning. After warming up, drivers are able to complete the lap with a close to competitive lap time, which they then try to improve incrementally. Usually, drivers do not reach a global optimum, but are aware of how to improve. High-and changing-speed corners are the most difficult ones, where spinning should be prevented, as it is extremely difficult to control. Which quantities do race drivers adapt and how? Do they pay attention to specific metrics? Although the goal of improving lap time is sound and clear, the real optimization process is indeed very complicated, and many factors have to be taken into consideration. The following three aspects are most critical during optimization: • Delta lap time. The adaption behavior of race drivers is result-oriented. They are not paying much attention to the exact speed values at local points around the track, but rather to the lap time difference to the previous or best lap. The association with the optimization problem is visualized on the top of Figure 2. • Brake point. Hitting the brake is where the corner starts. It is the most crucial tuning knob, not only because it influences the speed profile, but also since it is the source of any issues arising throughout the following corner. I.e., all issues should be traced back to the brake point, and cannot be locally analyzed. • Peak brake pressure. The driver attempts to predict the future state of the car when making decisions. In the presence of slip, however, uncertainty about the vehicle state is introduced, eventually leading to wrong predictions by the driver. Therefore, slip management is crucial during cornering, with the maixmum brake pressure helping to anticipate imminent slip. How do race drivers behave when the vehicle setup is modified? Will they pre-adapt their strategy according to the setup? It is extremely complicated to analyze the car and the behavior of the driver simultaneously. Therefore, when new vehicle setups are tested, the drivers do not and are not expected to have much idea of what has been adapted on the car. Sometimes, race engineers would do blind-tests in order to isolate the influences of the modified setups from those of the drivers.
2022-03-04T06:47:20.322Z
2022-03-03T00:00:00.000
{ "year": 2022, "sha1": "50bf2bdd0ae5702b327cad371f6914ab156dd78d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "bce36192f98daf2ff784ef4a315594555bf1d8db", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
121721
pes2o/s2orc
v3-fos-license
Fermion family recurrences in the Dyson-Schwinger formalism We study the multiple solutions of the truncated propagator Dyson-Schwinger equation for a simple fermion theory with Yukawa coupling to a scalar field. Upon increasing the coupling constant $g$, other parameters being fixed, more than one non-perturbative solution breaking chiral symmetry becomes possible and we find these numerically. These ``recurrences'' appear as a mechanism to generate different fermion generations as quanta of the same fundamental field in an interacting field theory, without assuming any composite structure. The number of recurrences or flavors is reduced to a question about the value of the Yukawa coupling, and has no special profound significance in the Standard Model. The resulting mass function can have one or more nodes and the measurement that potentially detects them can be thought of as a collider-based test of the virtual dispersion relation $E=\sqrt{p^2+M(p^2)^2}$ for the charged lepton member of each family. This requires three independent measurements of the charged lepton's energy, three-momentum and off-shellness. We illustrate how this can be achieved for the (more difficult) case of the tau lepton. Introduction Why Nature has laid down exactly three fermion families in the same representation of the Standard Model's gauge groups with universal couplings to the gauge bosons remains to be explained. Many ideas have been presented in the literature and we quote some to exemplify the farreaching implications of any experimental progress in the first question of what has been called the "Fermion problem". "Democratic approaches" [1] are in general based on the idea of equal fermion-to-Higgs Yukawa couplings. One then needs mechanisms to generate specific lepton or quark mass patterns, see for example [2]. Let us also recall the classic work of Froggat and Nielsen [3] that introduces the concept of "horizontal flavor" symmetry (nowadays often called Interfamily symmetry) by which the three families are degenerate or quasi-degenerate at some high scale beyond present experimental reach, due to unknown symmetry. At lower energies dynamical (renormalization group) effects amplify any small symmetry breaking term causing the ratios between masses that we see in current accelerators. In essence, the idea is that each of the elementary particle types, for example the charged leptons (e, µ, τ ) provide a representation of the posited interfamily symmetry. Since the symmetry is broken at the TeV scale and below, revealing it requires theoretical extrapolation a Contact email fllanes@fis.ucm.es from current experimental data. The same authors have followed-up with ideas based on Anti-grand unification [4]. An alternative line of research assumes that the known elementary particles are really composite objects of more fundamental "preons" [5]. To date no experiment has revealed composite structure beyond the usual layers of quantum pairs involved in radiative corrections around seemingly point-like sources. One can argue that a new strong interaction binds preons and this might conceivably be revealed in future experimental efforts. Further, dynamical breaking of chiral symmetry as a genuine quantum effect is well known in strong interaction physics [6,7] and has also been invoked in flavor physics as an alternative to the tree-level Higgs mechanism, for example by Brauner and Hosek [8]. 
However, no one seems to have paid attention to the fact that the Dyson-Schwinger equations do provide one with several solutions depending on the strength of the coupling. In this work we call attention to this point and its potential relevance for the fermion problem. Each of the three fermions corresponds in this hypothesis to a quantum over a different vacuum (but in the covariant formalism one pursues the study of correlation functions, here propagators, and sidesteps the issue of the vacuum wave functional). Far from attempting a complete theory, we will show the general features of the mechanism within a simple model of a fermion field coupled to a scalar boson through a Yukawa coupling. From this perspective, different flavored fermions, one corresponding to each family, are quanta of the same field, so the Lagrangian can be written more economically. Each fermion is an elementary one-particle excitation on top of a vacuum that is a local minimum of the Hamiltonian. The excitations over the ground-state vacuum provide the lightest fermion family. Finally, at fixed coupling g, the Dyson-Schwinger equation has a finite number of solutions, so a finite number of families arise. The question of why there are three families then becomes only a question of the value of g, the interaction coupling, and may have no special significance. The equivalent of the propagator Dyson-Schwinger equation in a non-covariant framework is the well-studied mass gap equation of potential models of QCD. The equation has been numerically solved for the harmonic oscillator potential [9], [10], the linear potential [11], and the linear plus Coulomb potential [12,13]. Bicudo, Ribeiro and Nefediev have systematically studied the excited solutions of this equation [15]. Our results seem compatible with this prior work. Dyson-Schwinger equation and its numerical solution We examine a simple Yukawa theory for one fermion field coupled to a real scalar field with Lagrangian density in Euclidean space L = Ψ̄(∂̸ + m_Ψ)Ψ + (1/2)(∂_µφ)(∂_µφ) + (1/2)m_φ²φ² + g Ψ̄Ψφ. The Dyson-Schwinger equation for the fermion propagator in this theory is represented in figure 1 in the rainbow approximation (neglecting the vertex dressing). To examine generic features of dynamical chiral symmetry breaking we can further ignore the running of the wavefunction renormalization and set Z(p²) = 1, a constant. This leaves one scalar equation for the fermion mass function that reads M(p²) = m_Ψ + g² ∫ d⁴q/(2π)⁴ F(p², q², y) M(q²)/(q² + M(q²)²). In the chiral limit m_Ψ → 0 this equation becomes homogeneous and accepts a chiral-symmetry preserving solution M = 0 that continuously deforms into a soft-running mass form when m_Ψ is not zero. But for strong enough kernels F it also admits other solutions that break chiral symmetry. The kernel in this simple Yukawa theory is F(p², q², y) = 1/((p − q)² + m_φ²) = 1/(p² + q² − 2√(p²q²) y + m_φ²), with y the cosine of the polar angle in Euclidean four-dimensional space from p to q. To obtain excited solutions for M as a function of p² we apply an iterative linear method with different initial guesses with or without nodes for M(p²). This method proceeds by examining linear deviations δM(p²) of a trial mass function from the exact solution. This can in turn be discretized (simply discretizing p) and cast as a linear system and solved with standard numerical linear algebra tools. The kernels require only the calculation of two-dimensional integrals that are also standard fare on modern computers. By naive dimensional analysis one sees that the integral needs UV regularization. We accomplish this by a simple cut-off method.
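For orientation, a crude numerical sketch of the gap equation is given below. It uses a plain damped fixed-point iteration rather than the iterative linear method described above, and therefore typically converges only to the nodeless solution; grid sizes, the damping factor, and the normalization of the angular measure are illustrative assumptions.

import numpy as np

def solve_gap_equation(g, m_phi=1.0, m_psi=0.0, cutoff=200.0,
                       n_p=80, n_y=48, n_iter=300, damping=0.5):
    # Damped fixed-point iteration of the rainbow gap equation on a logarithmic
    # Euclidean momentum grid with a hard UV cutoff; the angular integral over
    # y = cos(theta) is done with Gauss-Legendre nodes weighted by sqrt(1 - y^2)
    p = np.logspace(-2, np.log10(cutoff), n_p)
    q, dq = p, np.gradient(p)
    y, wy = np.polynomial.legendre.leggauss(n_y)
    wy = wy * np.sqrt(1.0 - y ** 2)                      # four-dimensional angular measure
    M = np.ones(n_p)                                     # nodeless initial guess
    for _ in range(n_iter):
        M_new = np.empty(n_p)
        for i, pi in enumerate(p):
            denom = pi ** 2 + q[None, :] ** 2 - 2.0 * pi * q[None, :] * y[:, None] + m_phi ** 2
            angular = wy @ (1.0 / denom)                 # kernel integrated over the angle
            M_new[i] = m_psi + g ** 2 / (4.0 * np.pi ** 3) * np.sum(
                dq * q ** 3 * M / (q ** 2 + M ** 2) * angular)
        M = damping * M_new + (1.0 - damping) * M
    return p, M

In the chiral limit such an iteration collapses onto M = 0 below the critical coupling and settles on a chiral-symmetry-breaking solution above it.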
Then the parameters should be chosen to run with the cutoff g(Λ), m ψ (Λ) to ensure that the mass functions turn largely independent of Λ. In our results we display the dependence of the solution with g at fixed Λ. The boson mass m φ is chosen to be 1 and sets the unit of the theory. Numerical results and discussion In this section we present the chiral-symmetry breaking conventional and new excited state solutions. By running our computer program incrementing g we meet critical g values g 0 , g 1 , ...g n above which there are exactly n + 1 solutions, the last with n nodes, that we find numerically. For example the first solution appears for a critical g 0 ≃ 6.8 (of course this number runs with the cutoff). We show in figure 2 the three solutions obtained with g = 14, Λ = 200. Then in figure 3 we plot the nodeless solution at fixed cutoff incrementing g sequentially to show the dependence of the mass function on the coupling. Only the nodeless solution has been widely used in past literature for its applications in hadron physics. The others are sometimes rejected [16] on the basis of "wrong" UV asymptotic behavior, arguing that they do not match with conventional mass running in perturbation theory, and artificially imposing a boundary condition that discards all but one solution. We point out here that this mismatch poses no problem. One should interpret these solutions as the one-particle excitations above different extrema of the Hamiltonian or vacuum replicae. Therefore they cannot all be matched to the same perturbation theory around one particular vacuum. If one insists on viewing them from the one vacuum that connects smoothly to perturbation theory, then they have to be written down as complicated collective states in terms of the one-particle solutions over this vacuum. In the particular case of the Yukawa theory, the angular integral in eq. 3 can be performed analytically [14], but we prefer keeping a two-dimensional integral thinking of future work with more complicated model Lagrangians. In agreement with the findings of [15], we see that the higher excited vacua in this covariant Euclidean formulation present zeroes in the mass function. A version of the Sturm-Liouville theorem must be at work (this is not surprising since the Hamiltonian is hermitian). Also note that the values of M (0) generated in this covariant Yukawa theory show quite some hierarchy. In figure 2 one can see that they are well-spaced. Therefore this field-theory mechanism might conceivably be at work in the fermion flavor problem. However M (0) seems to be larger for the nodeless solution than for the excited ones (this puzzling property also appears in the equal-time approach of [15], and the extent of its model dependence needs to be further investigated). To confirm this point, we choose a different set of parameters and implementation of the computer code. With m φ = 100, a cutoff Λ = 10 4 implemented as a Gaussian fall-off of the integration measure (as opposed to terminating the grid), and g = 50, with a small bare fermion mass m ψ = 5 · 10 −4 , one obtains the three solutions depicted in figure 4. This graph confirms that the nodeless solution is higher in mass, and that it is possible to generate sizeable mass hierarchies between the solutions. We use this run to also illustrate the dependence of M 2 with the cutoff, displayed in figure 5. As can be seen, the excited solution does not decrease monotonically as g → 0, it rather seems to become a singular solution at the critical coupling. 
This technicality is, however, probably irrelevant for physical applications. In hadron physics, the strength of the interaction being fixed by extensive phenomenology, there would be no freedom to vary g. In the physically relevant cases studied in [15], excited solutions exist at physical values of the coupling. In [17] Ribeiro and Nefediev have further extended their original results to include bound states (mesons) constructed with the replicated quasiparticles. In particular there are Goldstone bosons over the replicated vacuum, which are of course not massless pions but would appear as excited pion states (separated from the ground-state pion by the mass gap between the two vacua). Whether any conventional mesons accept a more convenient description as replicated mesons instead of excited mesons over the standard vacuum remains an open question. Working in Euclidean space as we do provides rapidly convergent integrals. This technicality may be avoided by employing a Lehmann representation [18] that allows a direct solution in Minkowski space with similar results. Experimental signature In a free field theory, a field quantum of mass m satisfies the on-shell condition E² = p² + m², which causes a pole in the free propagator at p² = m². In an interacting field theory this pole will be shifted due to renormalization from its bare to its physical position. The dependence on the renormalization scale can be traded for a dependence on the particle's Euclidean 4-momentum, so that M = M(p²), yielding the transcendental equation for the pole position, m_pole² = M(m_pole²)². Another way of visualizing the function M(p²) of the interacting theory is to trade p² for the space-like part p² as the argument of M, generating therefore a non-trivial dispersion relation for virtual particles, E = √(p² + M(p²)²), and the zeroes of the mass function M appear as points where E = |p|, which would otherwise only be reached asymptotically at large p (see figure 7). Tests for a non-trivial (real-particle) dispersion relation have been proposed and some carried out in the search for violations of Lorentz invariance [19]. However, these are very indirect and usually performed at low energy, so an accelerator-based test of the (virtual-particle) dispersion relation by separately measuring E, p and the off-shellness is preferable. In our case, real particles still provide representations of the Poincaré group and have constant, physical mass. However, virtual particles off their mass shell will display the running mass function, and this can be captured by analyzing the amplitudes of physical processes in perturbation theory. (Figure: dependence of M² on the coupling g.) [...] them would present a node. In either case a zero is present in the mass function of one of the fermions. An interesting experiment is therefore a measurement of p, E and the off-shellness ∆² = p² − M²(p²) of a τ, at various momenta p. The B factories have accumulated large τ samples [23] that are under analysis and can be employed for this purpose. The process one may investigate is depicted in Fig. 10 in the appendix below. An electron-positron pair collides and annihilates into a τ⁻*τ⁺ pair, where the τ⁻* is off its mass shell. Its four-momentum can be inferred from the center-of-momentum energy of the incoming e⁻e⁺ pair and by reconstructing the energy and momentum of the on-shell τ⁺ from its decay products, as detailed shortly. Finally, one needs the off-shellness, but this can be obtained from the number of counts with given E, |p|.
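The two measurable ingredients described above, the virtual dispersion relation E = √(p² + M(p²)²) with its pole condition and the off-shellness of the τ⁻* in e⁺e⁻ → τ⁻*τ⁺, can be sketched as follows. Evaluating the space-like mass function at the pole, the root-search bracket, and the use of the physical τ mass instead of M(p²) when quoting the off-shellness are simplifying assumptions.

import numpy as np
from scipy.optimize import brentq

M_TAU = 1.777  # GeV, approximate physical tau mass

def virtual_dispersion(p_mag, M_of_p2):
    # E(|p|) = sqrt(p^2 + M(p^2)^2) with the space-like running mass; where M has a
    # node the curve touches the massless line E = |p|
    M = M_of_p2(p_mag ** 2)
    return np.sqrt(p_mag ** 2 + M ** 2)

def pole_mass(M_of_p2, bracket=(1e-6, 1e4)):
    # Solve the transcendental pole condition m^2 = M(m^2)^2 by bracketed root finding;
    # M_of_p2 can be an interpolation of the numerical solution of the gap equation
    return np.sqrt(brentq(lambda m2: m2 - M_of_p2(m2) ** 2, *bracket))

def offshell_tau(sqrt_s, p_tau_plus):
    # Center-of-mass bookkeeping for e+ e- -> tau-* tau+: the off-shell tau- carries
    # whatever energy and momentum the reconstructed on-shell tau+ does not
    p_plus = np.asarray(p_tau_plus, dtype=float)
    E_plus = np.sqrt(p_plus @ p_plus + M_TAU ** 2)
    E_minus = sqrt_s - E_plus
    p_minus = -p_plus
    delta2 = E_minus ** 2 - p_minus @ p_minus - M_TAU ** 2   # off-shellness Delta^2
    return E_minus, p_minus, delta2

Each reconstructed event then contributes one point (E, |p|, ∆²), and the count rate as a function of ∆² at fixed (E, |p|) is what carries the information on the intermediate propagator.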
This is obvious since the cross section for a specific process, where the off-shell τ − decays into for example l − ν̄ l ν τ , depends on the off-shellness ∆ 2 of the τ − (see also App. A). Let us now detail the possible analysis:
1. In the center-of-mass frame of an e − e + collision one can identify two back-to-back hadron jets, tagging the flavor by demanding that one of them kinematically reconstructs an on-shell τ . This is rendered difficult by the undetected neutrino, which we assume to be the only unreconstructed track on the left side of the event.
2. Complete reconstruction of the energy and momentum on the left side of the event is possible with a vertex detector (giving the direction of motion of the on-shell τ ); see Fig. 8, where, in center-of-mass production with p τ + + p τ − = 0, one can fully reconstruct the decay of both τ leptons if only the neutrinos escape undetected. The total energy is taken from collider calibration, and matched to the energy of the visible tracks. Balancing energy provides the energy of the missing neutrino, and automatically the momentum of the left side, which can be tested against the hypothesis of a physical τ being produced.
3. Next one examines the right side of the reaction, where the secondary and primary vertex seem to coincide (as appropriate for a virtual, off-shell τ decaying rapidly). The total three-momentum is known a priori. One balances momentum to obtain the momentum taken by the undetected ν τ , and obtains the total energy. Now, it is not necessarily true that E 2 − p 2 = m 2 τ .
4. The number of counts with given E, p on the right side is normalized to a cross section in terms of the off-shellness, σ(∆ 2 ), and used to obtain M from the value of the intermediate propagator in perturbation theory.
An alternative possibility is to carry out the measurement on the expected O(20-30) τ events at the OPERA experiment [21] in the Gran Sasso National Laboratory. Given the momentum of the neutrinos from the CNGS beam one can attempt full kinematical reconstruction of (E, p). However, here the difficulty resides in the off-shellness, since a small value will be forced by the identification of the τ . The experimental test for the muon and the electron is even simpler, for example through Compton and inverse Compton scattering, where the fermion is off-shell in the intermediate state. For quarks one needs to take into account that they always appear in bound states, and therefore the mass function is always under an integral sign and reconstruction from experimental data is difficult. However, a particularly simple case is when the strangeness can be completely tagged by counting all hyperons in both jets. In principle the same analysis carried out here could also be undertaken for Majorana particles, but this is out of our scope in this preliminary work.
Conclusions
We have called attention to an interesting feature of field theory, namely the possibility of having several solutions of the one-particle Dyson-Schwinger equations for a broad class of theories. The excited solutions appear upon increasing the coupling constant sufficiently, breaking chiral symmetry. (Fig. 7 displays the virtual dispersion relation, at fixed renormalization scale, for the solution with two nodes; we find a similar result for one node. As the running mass actually has a zero, the dispersive curve touches the massless limit that would otherwise be reached only asymptotically.) The spectrum of states on top of each of the vacua is called "replica" or "recurrence" and provides a mechanism that might be interesting for the theory of flavor.
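To visualize the kind of curve shown in Fig. 7, here is a small toy computation of ours (the running mass below is invented, not a solution of Eq. (2)): a mass function with a single zero is inserted into the virtual dispersion relation E(|p|) = sqrt(|p| 2 + M 2 (|p| 2 )), and the gap E − |p| closes exactly where M vanishes.

import numpy as np

def M(p2):
    """Invented running mass with one zero at p^2 = 4 (arbitrary units); NOT the paper's solution."""
    return (1.0 - p2 / 4.0) * np.exp(-p2 / 100.0)

p = np.linspace(0.0, 6.0, 13)
E = np.sqrt(p ** 2 + M(p ** 2) ** 2)          # virtual dispersion relation E(|p|)
for pi, Ei in zip(p, E):
    print(f"|p| = {pi:4.1f}   E = {Ei:6.3f}   E - |p| = {Ei - pi:7.4f}")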
The number of recurrences arises from no special symmetry, since non-linear equations can perfectly well have a finite number of solutions (again, depending on the value of the coupling, this number can be chosen to be three). These solutions have been shown in the past in a Hamiltonian framework [15] and we have minimally extended the results of these authors to show that the covariant formulation allows similar phenomena. The excited solutions have mass functions that present zeroes, and this is reflected in the virtual dispersion relation (a feature of an interacting field theory that should not be confused with breaking of Lorentz invariance). Whether these solutions are relevant to the fermion family problem can be tested at B factories employing their large samples of directly produced τ leptons. At this point it is a meaningless exercise to attempt to obtain the fermion masses in this scheme: three data points M i (0) would be used to fit three parameters, the current fermion mass (that we have here set to zero), the boson mass, and g at a given cutoff, so the current predictive power is null. However, we find this field-theory mechanism still worth attention given the ease with which fermion coupling universality can be incorporated by having three recurrences of the same field. Testing the off-shell dispersion relation should allow the scenario to be discarded immediately, so we believe we have presented a valid physical hypothesis. Of course, the success of radiative corrections in the standard model makes one wish for a measurement at the highest possible energy, maybe at a future linear collider.
This work has been performed in the framework of the research projects FPA 2004-02602, 2005-02327, PR27/05-13955-BSCH (Spain) and is part of the Master's thesis of Mr. Páramo Martín presented to the faculty of U. Complutense. TVC is a postdoctoral fellow for the Fund for Scientific Research - Flanders and acknowledges the support of the "Programa de Investigadores Extranjeros en la UCM - Grupo Santander".
Appendix A: τ -pair production in e + e − collisions
A.1 On-shell cross section
We look at the e + e − → τ + τ − process depicted in Fig. 9, where all ingoing and outgoing particles are on their mass shell. Using the conventions of Ref. [22], the amplitude for the depicted process is the single-photon-exchange expression. For the unpolarized cross section, one needs the squared amplitude averaged over initial and summed over final spins, with the spinor indices explicitly shown. For e + e − annihilation, one has Q 2 = s. One can now rearrange the factors in Eq. (12). Making use of the completeness relations for the spinors, one can write Eq. (12) as a product of traces. Neglecting the electron mass, this results in a compact expression which, in terms of Mandelstam variables, reads |M| 2 = (2e 4 /s 2 ) (t 2 + u 2 + 4 m τ 2 s − 2 m τ 4 ), where at high energies the mass terms become negligible. In the center-of-momentum (com) frame the amplitude can be written in terms of θ τ , the com angle between the incoming electron e − and the outgoing τ − . The differential cross section in the com frame then follows, with α = e 2 /4π, and integration over the solid angle gives the total cross section.
A.2 Half off-shell cross section
In this part of the Appendix we look at the e − e + → τ − τ + process, where one of the τ particles is off its mass shell and decays into a lighter lepton l (electron or muon). We discuss the case of an off-shell τ − ; the case of an off-shell τ + is clearly completely analogous. The process is displayed in Fig. 10. The off-shellness of one of the τ 's changes the kinematics: the total energy in the reaction, √ s in the com frame, is unevenly divided between the two sides. The amplitude for the process depicted in Fig.
10 is analogous, now including the W -mediated decay of the off-shell τ − . The propagator of the off-shell τ − gives rise to a dependence of the amplitude on the off-shellness ∆ 2 = E 2 − |p| 2 − M 2 of the τ − . In the limit of infinite W -boson mass and vanishing electron mass, the unpolarized squared amplitude for the process depicted in Fig. 10 is given in [24]. The (differential) cross section for this process is proportional to the above squared amplitude, integrated over the three-momenta of the outgoing τ neutrino and lepton antineutrino. This is a standard computation that can be incorporated in the computer code if needed, yet it is clear that the resulting cross section will depend on the off-shellness ∆ 2 and on the virtual-τ mass-energy dispersion relation. In particular, at large off-shellness the cross section behaves as 1/∆ 2 , and the constant multiplying this parametric behavior can be fit to data.
Introduction
Why Nature has laid down exactly three fermion families in the same representation of the Standard Model's gauge groups, with universal couplings to the gauge bosons, remains to be explained. Many ideas have been presented in the literature and we quote some to exemplify the far-reaching implications of any experimental progress on the first question of what has been called the "Fermion problem". "Democratic approaches" [1] are in general based on the idea of equal fermion-to-Higgs Yukawa couplings. One then needs mechanisms to generate specific lepton or quark mass patterns, see for example [2]. Let us also recall the classic work of Froggatt and Nielsen [3] that introduces the concept of "horizontal flavor" symmetry (nowadays often called interfamily symmetry), by which the three families are degenerate or quasi-degenerate at some high scale beyond present experimental reach, due to an unknown symmetry. At lower energies dynamical (renormalization-group) effects amplify any small symmetry-breaking term, causing the ratios between masses that we see in current accelerators. In essence, the idea is that each of the elementary particle types, for example the charged leptons (e, µ, τ ), provides a representation of the posited interfamily symmetry. Since the symmetry is broken at the TeV scale and below, revealing it requires theoretical extrapolation from current experimental data. The same authors have followed up with ideas based on Anti-grand unification [4]. An alternative line of research assumes that the known elementary particles are really composite objects of more fundamental "preons" [5]. To date no experiment has revealed composite structure beyond the usual layers of virtual pairs involved in radiative corrections around seemingly point-like sources. One can argue that a new strong interaction binds preons and this might conceivably be revealed in future experimental efforts. Further, dynamical breaking of chiral symmetry as a genuine quantum effect is well known in strong-interaction physics [6,7] and has also been invoked in flavor physics as an alternative to the tree-level Higgs mechanism, for example by Brauner and Hosek [8]. However, no one seems to have paid attention to the fact that the Dyson-Schwinger equations provide one with several solutions depending on the strength of the coupling. In this work we call attention to this point and its potential relevance for the fermion problem.
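For orientation, the size of the hierarchy that any such mechanism would have to reproduce for the charged leptons can be read off their measured masses (approximate values quoted below; this small numerical aside is ours, not the paper's):

masses = {"e": 0.511, "mu": 105.66, "tau": 1776.9}   # MeV, approximate measured values
print("mu/e   ~", round(masses["mu"] / masses["e"], 1))    # ~206.8
print("tau/mu ~", round(masses["tau"] / masses["mu"], 1))  # ~16.8
print("tau/e  ~", round(masses["tau"] / masses["e"], 1))   # ~3477.3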
Each of the three fermions corresponds in this hypothesis to a quantum over a different vacuum (but in the covariant formalism one pursues the study of correlation functions, here propagators, and sidesteps the issue of the vacuum wave functional). Far from attempting a complete theory, we will show the general features of the mechanism within a simple model of a fermion field coupled to a scalar boson through a Yukawa coupling. From this perspective, differently flavored fermions, one corresponding to each family, are quanta of the same field, so the Lagrangian can be written more economically. Each fermion is an elementary one-particle excitation on top of a vacuum that is a local minimum of the Hamiltonian. The excitations over the ground-state vacuum provide the lightest fermion family. Finally, at fixed coupling g, the Dyson-Schwinger equation has a finite number of solutions, so a finite number of families arises. The question of why there are three families then becomes only a question of the value of g, the interaction coupling, and may have no special significance. The equivalent of the propagator Dyson-Schwinger equation in a non-covariant framework is the well-studied mass gap equation of potential models of QCD. The equation has been numerically solved for the harmonic oscillator potential [9], [10], the linear potential [11], and the linear plus Coulomb potential [12,13]. Bicudo, Ribeiro and Nefediev have systematically studied the excited solutions of this equation [15]. Our results seem compatible with this prior work.
Dyson-Schwinger equation and its numerical solution
We examine a simple Yukawa theory for one fermion field coupled to a real scalar field, with a Euclidean Lagrangian density containing a Yukawa interaction of strength g between the fermion and the scalar. The Dyson-Schwinger equation for the fermion propagator in this theory is represented in figure 1 in the rainbow approximation (neglecting the vertex dressing). To examine generic features of dynamical chiral symmetry breaking we can further ignore the running of the wave-function renormalization and set Z(p 2 ) = 1, a constant. This leaves one scalar equation, Eq. (2), for the fermion mass function, expressing M (p 2 ) as the bare mass m Ψ plus a loop integral of M (q 2 )/(q 2 + M 2 (q 2 )) against the kernel F . In the chiral limit m Ψ → 0 this equation becomes homogeneous and accepts a chiral-symmetry-preserving solution M = 0 that continuously deforms into a soft-running mass form when m Ψ is not zero. But for strong enough kernels F it also admits other solutions that break chiral symmetry. The kernel in this simple Yukawa theory, Eq. (3), is the scalar-boson propagator written in terms of y, the cosine of the polar angle in Euclidean four-dimensional space between p and q. To obtain excited solutions for M as a function of p 2 we apply an iterative linear method with different initial guesses, with or without nodes, for M (p 2 ). This method proceeds by examining linear deviations from the exact solution, M (p 2 ) = M 0 (p 2 ) + ε(p 2 ) (Eq. (4)), which yield a linear integral equation for ε(p 2 ). This can in turn be discretized (simply discretizing p) and cast as a linear system, solved with standard numerical linear-algebra tools. The kernels require only the calculation of 2-dimensional integrals, which are also standard fare on modern computers. By naive dimensional analysis one sees that the integral needs UV regularization. We accomplish this by a simple cut-off method. Then the parameters should be chosen to run with the cutoff, g(Λ), m ψ (Λ), to ensure that the mass functions become largely independent of Λ. In our results we display the dependence of the solution on g at fixed Λ. The boson mass m φ is chosen to be 1 and sets the unit of the theory.
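To make the numerical procedure concrete, here is a minimal, self-contained sketch of a solver for a gap equation of the type described above (rainbow Yukawa, Z = 1, hard momentum cutoff, chiral limit). The kernel normalization g^2/(4 pi^3), the grid, the seed profiles and the coupling value are our illustrative assumptions and need not match the paper's Eqs. (2)-(3); in particular the critical coupling in this toy normalization need not coincide with the value 6.8 quoted earlier, and whether a given seed converges to an excited solution depends on the seed. The linearized (Newton-like) update is the one described in the text.

import numpy as np

# Illustrative solver for M(p^2) = m_psi + g^2/(4 pi^3) * int dq q^3 K(p,q) M(q^2)/(q^2 + M(q^2)^2)
m_phi, m_psi, Lam, g = 1.0, 0.0, 200.0, 14.0
q = np.logspace(-3, np.log10(Lam), 300)             # Euclidean momentum grid
w = np.gradient(q)                                  # crude quadrature weights

A = q[:, None] ** 2 + q[None, :] ** 2 + m_phi ** 2
B = 2.0 * q[:, None] * q[None, :]
K = np.pi / (A + np.sqrt(A ** 2 - B ** 2))          # angular integral of the boson propagator (stable form)
c = g ** 2 / (4.0 * np.pi ** 3)                     # assumed measure normalization (illustrative)

def residual(M):
    return M - m_psi - c * K @ (w * q ** 3 * M / (q ** 2 + M ** 2))

def solve(M0, steps=40):
    """Linearized (Newton) iteration: discretize, build the Jacobian, solve the linear system."""
    M = M0.copy()
    for _ in range(steps):
        dSdM = (q ** 2 - M ** 2) / (q ** 2 + M ** 2) ** 2       # d/dM of M/(q^2+M^2)
        J = np.eye(len(q)) - c * K * (w * q ** 3 * dSdM)[None, :]
        M = M - np.linalg.solve(J, residual(M))
    return M

for label, seed in [("nodeless seed", 0.5 * np.exp(-(q / 5.0) ** 2)),
                    ("one-node seed", 0.5 * (1.0 - (q / 3.0) ** 2) * np.exp(-(q / 10.0) ** 2))]:
    M = solve(seed)
    s = np.sign(M[np.abs(M) > 1e-10])
    print(label, "-> M(0) =", round(float(M[0]), 4),
          ", nodes =", int(np.sum(s[1:] != s[:-1])),
          ", max residual =", float(np.max(np.abs(residual(M)))))

The printed residual norm indicates whether the iteration actually converged to a solution; counting sign changes of the converged profile gives its node number.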
2014-10-01T00:00:00.000Z
2006-08-31T00:00:00.000
{ "year": 2006, "sha1": "95b5dad5c1ea7f50ab5ecd40c746bae345b466c2", "oa_license": null, "oa_url": "https://biblio.ugent.be/publication/385985/file/1136502.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "52344e101de45aa03c52977ffe4506898382d5f5", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
118700673
pes2o/s2orc
v3-fos-license
SUSY Enhancement of Heavy Higgs Production We study the cross-section of heavy Higgs production at the LHC within the framework of the Constrained MSSM. It is not only enhanced by tan^2 beta but sometimes is also enhanced by the squark contribution. First, we consider the universal scenario within mSUGRA and find out that to get the desired enhancement one needs large negative values of A_0, which seems to be incompatible with the b->s gamma decay rate. To improve the situation, we release the unification requirement in the Higgs sector. Then it becomes possible to satisfy all requirements simultaneously and enhance the squark contribution. The latter can gain a factor of several units increasing the overall cross-section which, however, is still smaller than the cross-section of the associated H b bbar production. We consider also some other consequences of the chosen benchmark point. Introduction With the launch of the LHC the expectations for discovery of the Higgs boson and possible new physics became actual. As usual, the production of heavy particles is suppressed by their masses, so that one expects to find the light particles first. However, sometimes the heavy particle production can be enhanced by some factors. This is exactly what happens with the heavy Higgs production in the MSSM 1,2 . We study this enhancement for the case of gluon fusion and show that not only the tan β enhancement takes place 3 , but also there is an additional source of enhancement due to the squark contribution in the loops. It is well known that the Higgs boson production at hadron colliders within the SM mainly goes through the gluon fusion process 4 . It is the triangle loop diagram (see Fig. 1) that gives the main contribution. This is also true in the MSSM, though in this case the associated production with two b-quarks (two b-jets) is even more favorable 1 . The latter process is realized at the tree level and, hence, has no new virtual particles involved contrary to the loop diagrams. Nevertheless, the triangle diagrams do not give additional b-jets in the final states and presumably can be distinguished from the associated production by b-tagging of these jets. Since we are talking about new particles in the loop, their contribution depends on their masses. The smaller the mass, the bigger is the contribution. At the same time, the squark contribution is also proportional to the quark mass, so only the third generation essentially plays any role. For numerical analysis we need the values of squark masses and mixings. We proceed in two ways: First, we consider the usual MSSM universal high energy parameters (m 0 , m 1/2 , A 0 , and tan β), evaluate masses and mixings, and calculate the cross-section for various points of parameter space. We find the areas in the parameter space where the loop enhancement takes place. This requires light top-squarks which is possible for very large and negative values of A t that implies negative A 0 . Then we consider the fulfillment of various constraints such as B → X s γ, 5 B s → µ + µ − , 6,7 g − 2 of muon (see, e.g., Ref. 8), relic density of the Dark Matter (DM) 9 , electroweak precision data on M W and sin 2 θ ef f (see, e.g., Ref. 10), and Higgs and superpartner searches in this region. We find out that the considered universal scenarios with large negative A 0 are not compatible with the b → sγ constraint. 
To avoid this problem and to have the cross-section at the level of a few pb, we release the universality constraint and allow non-universal Higgs mass (NUHM) terms 11 . As independent variables we take the Higgs mixing term µ and the CP-odd heavy Higgs boson mass m A . Our overall conclusion is that for a relatively light stop and a moderately heavy H 0 one can reach an essential enhancement of heavy Higgs production, albeit in a restricted region of the parameter space. Simultaneously, one gets a relatively high cross-section for stop pair-production, which might also be of interest in view of SUSY searches.
Cross-section for Heavy Higgs Production in the MSSM
The Feynman diagrams describing Higgs production via gluon fusion are shown in Fig. 1, where the last ones are due to squarks in the intermediate states. As was stated earlier, all the contributions are proportional to the quark masses, so only the third generation is relevant. The cross-section for Higgs boson production, with account of the gluon distribution functions, is given by Eq. (1) 1,4,12 , where g[x] is the gluon distribution function inside the proton that implicitly depends on the factorization scale Q. In our case, we take it equal to the Higgs boson mass. The matrix elements corresponding to the above diagrams are given in Eq. (2) (we use the notation where v = 175 GeV) 13 , where the angle α is the neutral Higgs mixing angle defined by tan 2α = Z tan 2β and is typically equal to α ≈ β − π/2. It should be noted that by definition α ∈ [−π/2, 0], so that sin α < 0, and the sign of the t-quark contribution is different for the light h and heavy H Higgs boson matrix elements. It is known 1 that for the lightest Higgs boson h (with the mass m h < 400 − 500 GeV) the loops with the bottom and top quarks interfere destructively. In contrast, in the case of the heavy boson H the interference is constructive and becomes destructive only when the mass of the heavy boson is above 400-500 GeV. As one can see from Eq. (2), light Higgs boson h production is almost not influenced by tan β, while for the heavy Higgses H and A the contribution of the b-quark is enhanced by tan β and that of the t-quark is suppressed by tan β. Hence, for high tan β (which is of interest for us due to the enhancement of the cross-section) only the b-quark is essential. The addition of squarks is achieved by the modification given in Eq. (3), where the squark mixing parameters and the mixing angles enter, and the upper sign corresponds to squark 1 and the lower sign to squark 2 (we have corrected some misprints in Ref. 13). Note that due to the appearance of the quark mass squared versus tan β, the main contribution comes from the t-squarks and not from the b-squarks. The triangle functions F 1/2 and F 0 entering these equations are complex functions of a single argument and acquire an imaginary part at the threshold, when m qi = m Higgs /2 (see Fig. 2). At the threshold the modulus of F 0 is maximal and saturates the squark contribution. Thus, the desired enhancement of the cross-section is achieved at the threshold, when the masses of the t-squark and of the heavy Higgs boson are correlated and differ by a factor of 1/2. In all the formulas the values of the quark masses and of α s should be taken at the m Higgs scale. In what follows all the needed low-energy running parameters are calculated with the help of the SOFTSUSY 3.1.6 code 15 , which not only performs the RG evolution but also incorporates the important threshold effects, in particular for the b-quark mass 16 , which is essential for our analysis.
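For reference, here is a small sketch of ours of the triangle loop functions just discussed, in one common convention (overall signs and normalizations may differ from Ref. 13 as used in the paper): the argument is tau = 4 m^2 / m_H^2, F_1/2 is the quark loop and F_0 the squark loop. The printout shows |F_0| peaking at the threshold tau = 1, in line with the statement above.

import numpy as np

def f(tau):
    """One-loop function of tau = 4 m^2 / m_H^2 (one common convention)."""
    if tau >= 1.0:
        return complex(np.arcsin(1.0 / np.sqrt(tau)) ** 2)
    x = np.sqrt(1.0 - tau)
    return -0.25 * (np.log((1.0 + x) / (1.0 - x)) - 1j * np.pi) ** 2

def F_half(tau):      # spin-1/2 (quark) triangle; tends to 4/3 far above threshold
    return 2.0 * tau * (1.0 + (1.0 - tau) * f(tau))

def F_zero(tau):      # spin-0 (squark) triangle; tends to 1/3 in magnitude far above threshold
    return -tau * (1.0 - tau * f(tau))

for tau in (0.25, 0.8, 1.0, 1.5, 4.0, 100.0):
    print(f"tau = {tau:6.2f}   |F_1/2| = {abs(F_half(tau)):.3f}   |F_0| = {abs(F_zero(tau)):.3f}")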
Universal soft SUSY breaking framework We start with the simplest mSUGRA-inspired scenario. Then in the MSSM with universal boundary conditions one has 4 parameters: m 0 , m 1/2 , A 0 and tan β. We take the sign of µ to be positive because of the SUSY contribution to g − 2 of muon 8 . In what follows, we fix tan β to be large of the order of 30 ÷ 50. This choice is motivated, on the one hand, by enhancement of the Higgs production cross-section and, on the other hand, by relic abundance of the DM in the Universe interpreted as a SUSY WIMP 3 . We present our results in the m 0 , m 1/2 plane varying the values of A 0 and tan β. As it will be clear later, the parameter A 0 has to be large, and it plays an essential role in the squark contribution. We have performed the calculations of the cross-sections according to Eqs. (1-3) with the MSTW2008-LO gluon distribution function 17,18,19 taken at Q ∼ m A ∼ m H . It is known that the leading order result can be substantially modified by the inclusion of high order (S)QCD corrections 12 . The net effect of the NLO b diagrams is usually summarized in the form of the so-called K-factors For small values of tan β the K-factor can enhance the cross-section by 100 %. However, it turns out that in the case of high tan β the K-factors for heavy Higgs bosons are much smaller (K = 1.1−1.2) and comparable with the overall theoretical uncertainty of the NLO result 26 . Since we are looking for enhancement by a factor of several units we ignore these subtleties here albeit of their importance in precision analysis. As it was mentioned above, to calculate the spectrum of superpartners and the other low scale parameters from the high energy ones, we use the RG running implemented in SOFTSUSY 3.1.6 code 15 . As the benchmark points, we choose three points in the m 0 , m 1/2 plane to be (m 0 , m 1/2 ) = (900, 300) GeV, (m 0 , m 1/2 ) = (1100, 300) GeV, and (m 0 , m 1/2 ) = (1700, 200) GeV, respectively, and allow A 0 to be positive and negative. This choice is dictated, on the one hand, by the requirement of smallness ofm t1 which gives the main contribution to the cross-section and, on the other hand, by restrictions on the parameter space coming from the other physical constraints 27 . The total cross-section for the heavy Higgs boson production as well as the ratio of the quark+squark cross-section to the quark one for three different benchmark points are shown in Fig. 3. The most significant contribution to the production cross-section, σ q+q , comes from the loop with the lightest squarkt 1 which gives almost 99% of the total value. The contribution oft 2 is suppressed by its heavy mass m 2 t /m 2 t2 and those of the b-squarks by the ratio m 2 b /m 2 b . The desired enhancement due to the squark contribution is achieved via the terms in Eq. (3b) proportional to sin 2θ t . The big enhancement can be obtained only for large and negative values of A 0 . One can understand qualitatively the result by noticing that the soft triple coupling A t which starts at A 0 at high scale tends to the IR fixed point at low energy 28 which is always negative. As a result, the absolute value of A t is minimal for positive A 0 and maximal for negative A 0 . At the same time, the stop mixing is proportional to A t and the bigger the mixing the smaller is the top squark massm 2 t1 and, hence, the bigger is the cross-section. So negative values of A 0 are favourable. 
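The see-saw-like lowering of the lightest stop mass by a large |A_t| can be illustrated with a 2x2 stop mass matrix; the soft masses below are invented numbers chosen only to show the trend (D-terms and loop corrections omitted), not the paper's spectrum.

import numpy as np

m_t, tan_beta = 173.0, 30.0
mQ, mU, mu = 700.0, 650.0, 1000.0          # GeV, illustrative soft masses and mu

def stop_masses(A_t):
    X_t = A_t - mu / tan_beta              # left-right mixing parameter
    M2 = np.array([[mQ ** 2 + m_t ** 2, m_t * X_t],
                   [m_t * X_t, mU ** 2 + m_t ** 2]])
    m1sq, m2sq = np.sort(np.linalg.eigvalsh(M2))
    return np.sqrt(m1sq), np.sqrt(m2sq)

for A_t in (0.0, -500.0, -1000.0, -1500.0, -2000.0):
    m1, m2 = stop_masses(A_t)
    print(f"A_t = {A_t:7.1f} GeV  ->  m_t1 = {m1:5.0f} GeV,  m_t2 = {m2:5.0f} GeV")

The lighter eigenvalue drops steadily as the off-diagonal entry m_t X_t grows, which is why large negative A_0 (and hence large |A_t| at low scale) is favourable here.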
Taking A 0 big and negative, it is possible to get a total cross-section of the order of 0.1 pb with an enhancement factor due to squarks of the order of several units. One can clearly see in Fig. 3 that the highest values of the ratio R H ≡ σ q+q /σ q and of the total cross-section σ q+q are achieved along the straight lines which correspond to the resonance condition 4m 2 t1 /m 2 H = 1. This is due to the properties of the functions F 0 and F H 1/2 mentioned above and to the fact that the squark and quark amplitudes interfere constructively at the threshold for A t < 0. As a result, one gets a considerable enhancement of the cross-section, with the leading role played by the lightest stop in the loop. The total cross-section reaches a fraction of a pb, which opens the possibility of earlier heavy Higgs boson detection. The weak point of our analysis is the necessity of large negative values of A 0 , which seems to contradict the fits 3 to the b → sγ decay rate for large tan β. It turns out that for the considered regions BR(B → X s γ) ≃ 10 −5 , which is an order of magnitude lower than the experimental value 5 (3.55 ± 0.24 ± 0.09) × 10 −4 . A careful investigation of the problem shows that for negative A 0 the chargino-stop contribution 29,30 to the Wilson coefficient C 7 (the coefficient of the effective operator O 7 = e 2 /(16π 2 ) m b (s L σ µν b R ) F µν ), which influences the b → sγ rate at leading order, tends to cancel the contributions due to the charged Higgs and the W-boson. In the considered scenarios the correction C χ 7 has the same order of magnitude as the sum C W 7 + C H 7 from the charged Higgs and the W-boson. Since BR(B → X s γ) LO ∝ |C 7 | 2 , one can immediately deduce that the corresponding branching ratio is lower than that of the SM. Moreover, it turns out that the constraint due to the DM relic density is also hard to fulfill in the considered regions, and the non-observation of B s → µ + µ − (see Refs. 6, 7) forbids the most promising part of the plane, with tan β ≳ 45. To overcome the above-mentioned difficulties with b → sγ, one can consider positive A 0 , in which case all the corrections to C 7 have the same sign and there is a good chance to obtain a proper value of the branching ratio. However, in the universal SUSY breaking scenarios with positive A 0 it is impossible to have simultaneously a large SUSY enhancement and a high heavy Higgs production cross-section. For example, choosing a low value of m 1/2 ≃ 200 GeV, moderate m 0 ≃ 500−600 GeV, and tan β ≃ 25 − 30 one can reduce the lightest stop mass below 200 GeV with the help of a large A 0 ≃ 2000 GeV. However, for this set of parameters the Higgs mass m H is too large, i.e., m H ≫ m t1 , and, consequently, the total cross-section is very small. The other possibility would be a significant enhancement of the chargino-stop loop so that |C χ 7 | is an order of magnitude larger than |C W 7 + C H 7 |. This effect strongly depends on the value of the µ-parameter, which influences the masses and mixing of the charginos. In the mSUGRA parameter regions considered here we have µ ∼ 1 TeV and C χ 7 is suppressed. Clearly, to save the situation and to get a reasonable phenomenological impact from the squark contribution we are forced to release the universality constraint for the Higgs masses and consider the NUHM model 11 .
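The size of the effect of the cancellation described above can be illustrated with a toy calculation (the numbers below are invented for illustration and are not computed from the model):

# Purely illustrative LO contributions to C_7; a chargino-stop term of opposite sign
# depletes |C_7|^2 and hence the leading-order b -> s gamma rate by a large factor.
C7_W, C7_H, C7_chargino = -0.31, -0.09, +0.34
C7_total = C7_W + C7_H + C7_chargino
print("C7(W) + C7(H+) =", round(C7_W + C7_H, 2), "  C7(total) =", round(C7_total, 2))
print("BR suppression factor ~", round((C7_total / C7_W) ** 2, 3))   # BR_LO scales as |C_7|^2

With such values the leading-order rate falls more than an order of magnitude below a SM-like rate, which is the qualitative situation described above.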
Non-universal soft supersymmetry breaking The non-universality in the Higgs sector parameterized by the pole mass of the CP-odd heavy boson m A and the running µ-parameter at the SUSY scale (NUHM scenario) provides us with the possibility to overcome the above-mentioned difficulties of the universal scenarios for both negative and positive A 0 . For the large tan β scenarios m H ≃ m A and we have enough freedom to obtain significant enhancement in the σ gg→H cross-section by adjusting m A . Moreover, since we also can adjust the µ-parameter, it is possible to fulfill the b → sγ constraint by the increase of the chargino contribution mentioned in the previous section. In the first place, we consider the case with negative A 0 and try to find a region that satisfies all the above-mentioned experimental constraints. In order to enhance the chargino contribution to the b → sγ decay rate, we need to decrease the value of the µ parameter. This, in turn, lowers the scale of stop masses. As a consequence, the lightest stop can become an LSP or even a tachyon if we consider very large values of |A 0 | which were chosen in the previous section. This kind of reasoning justifies our choice of A 0 given below. As a benchmark point we have used the following set of NUHM parameters m 1/2 = 250 GeV, m 0 = 625 GeV, µ = 240 GeV, m A = 340 GeV, A 0 = −1175 GeV, tan β = 30. This point lies in the region bounded by the experimental constraints mentioned above. Obviously, it is very hard to visualize the allowed region in the space of six free parameters. In what follows, we present in Fig. 4 the twodimensional sections of the region in m 0 − A 0 , m A − µ and tan β − A 0 planes, respectively. One can see how the allowed regions due to various constraints intersect with each other. For the calculation of the flavour observables and the relic density we use the SuperIso (Relic) code 31,32,33 , and the bounds for b → sγ d , B s → µ + µ − , and Ωh 2 correspond to 95 % CL. A point marked by the cross corresponds to the chosen benchmark scenario. This choice is somewhat random within the allowed region. Looking at the plot in Fig. 4 one can see where the allowed region moves when varying one or more parameters. For example, looking at the A 0 − tan β plane one can deduce that with a slight increase of tan β both the b → sγ and the B s → µ + µ + rates go up and the allowed strips in the m 0 − A 0 and m A − µ planes effectively move towards the lower values in the corresponding figures. However, the correlations between the degrees of freedom are strong enough, so that it is hard to get the entire picture. In what follows we try, at least qualitatively, to explain the key features of the emerged picture. Since the stop mass scale depends crucially on the m 1/2 parameter, we restrict ourselves to the value m 1/2 = 250 GeV. All the other parameters are allowed to vary. It turns out that the constraints due to the muon anomalous magnetic moment and electroweak precision data e are satisfied in the whole region studied (1 a µ × 10 9 2.5, ∆ρ 5 · 10 −4 ), so we do not draw the corresponding bounds. In the same figures, the SUSY enhancement of the Higgs production via the gluon fusion is demonstrated with the help of the ratio R H = σ q+q /σ q . Clearly, due to the fact that the quark contribution for our case is not very small, the enhancement is not very big in comparison with the results presented in the previous section, e.g., R H ∼ 5 for the benchmark point. 
Again, the value of R H correlates with xt 1 ≡ 4m 2 t1 /m 2 H . At the lightest stop production threshold it is maximal and R H decreases more rapidly when xt 1 > 1. In spite of the moderate enhancement the total cross-section is of the order of pb at the stop production threshold. At the top of Fig. 4 the plane m 0 − A 0 is shown for fixed tan β = 30, m 1/2 = 250 GeV, m A = 340 GeV, and µ = 250 GeV. One can see that the parameters A 0 and m 0 are correlated within the allowed band. This correlation corresponds to a constant value of the lightest stop mass lying in the range 150 − 200 GeV and can be easily explained by the fact that an increase in the stop mass with m 0 can be compensated via a see-saw like mechanism by an increase in the off-diagonal term in the stop mass matrix driven by the absolute value of A t . Clearly, both the B → µ + µ − and B → X s γ rates go down withm t1 . In the middle of Fig. 4, we show how the allowed bands due to various constraints intersect in the m A − µ plane. One can notice the dependence of the b → sγ rate on µ which somehow supports our hypothesis about the dominance of the chargino contribution to C 7 Wilson coefficient for small µ. With the increase of m A the charged Higgs mass increases correspondingly. As a consequence, the sum C H 7 + C χ 7 becomes bigger, thus, slightly increasing the branching fraction. The correct amount of the Dark Matter can be achieved if LSP annihilates via the virtual CP-odd Higgs boson in the s-wave. For this to happen, the neutralino mass m χ 0 should be adjusted to half m A . In our case, for fixed m 1/2 = 250 GeV the neutralino is mostly bino with m χ 0 around 100 GeV. Moreover, if µ is comparable with m 1/2 the fraction of higgsino component in χ 0 becomes larger and also increase the cross-section which is proportional to the mixing between the gaugino and higgsino components for the s-wave annihilation. These two facts explain, at least qualitatively, the behavior of the curves with constant value of the DM relic density. For low µ ∼ 200 GeV it is sufficient to have m A ≃ 400 GeV to obtain the correct value of Ωh 2 . However, when due to the increase of µ the mixing between the gaugino and higgsino components becomes small, one needs to lower m A to be closer to the A 0 -resonance to enhance the annihilation cross-section. For the considered value of tan β = 30 the upper bound from B s → µ + µ − excludes m A 330 GeV. All the constraints are satisfied in the small region near our benchmark point. Finally, at the bottom of Fig. 4 the tan β − A 0 plane is shown. It is easy to notice that large tan β 30 are excluded by the B s → µ + µ − constraint since the dominant SUSY contribution to this decay scales as tan 6 β. 35 In the allowed strip due to the b → sγ constraint the parameters A 0 and tan β are correlated since the enhancement due to tan β is compensated by the increase ofm t1 due to A 0 . The relic density constraint fixes tan β to be around 30. A tail of the Ωh 2 region corresponds to the stop co-annihilation. In summary, the key features of the allowed region are the following: m 1/2 ∼ µ ∼ 250 GeV (which influence significantly the lightest stop mass, b → sγ, and the mass and content of the lightest neutralino), tan β ∼ 30 (mostly due to the Ωh 2 constraint), m 0 and A 0 should be correlated (due to the stop mass), and m A 300 should not be very large (to have the Higgs production cross-section at the level of 1 pb). 
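The steepness of the B s → µ + µ − constraint mentioned above can be made explicit: the dominant SUSY contribution grows like tan 6 β, and (our addition, not stated in the text) the commonly quoted approximate dependence on the heavy Higgs mass is ~ 1/M A 4 from the Higgs propagators. Relative to the benchmark (tan β = 30, M A = 340 GeV):

ref_tanb, ref_MA = 30.0, 340.0
for tanb, MA in ((25.0, 340.0), (30.0, 340.0), (35.0, 340.0), (30.0, 400.0)):
    ratio = (tanb / ref_tanb) ** 6 * (ref_MA / MA) ** 4
    print(f"tan(beta) = {tanb:4.1f}, M_A = {MA:5.0f} GeV  ->  relative rate ~ {ratio:4.2f}")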
For our benchmark point the heavy CP-even Higgs boson decays predominatly into the heavy down-type fermions, i.e., bb (∼ 90%) , ττ (∼ 10%). The latter signature has already been analyzed by both the ATLAS 36 and CMS 37 collaborations and important bounds on m A and tan β were deduced. However, the scenarios with m A > 300 GeV and tan β < 50 are not excluded at the moment. Before going to conclusions let us mention the situation with the case of A 0 > 0. It should be noted that contrary to the A 0 < 0 case, positive A 0 leads to destructive interference between the squark and quark amplitudes at the stop threshold in the cross-section for heavy neutral Higgs production. The only possibility to enhance the cross-section is to be slightly below the thresholdm t1 m H /2 when the corresponding squark amplitude develops a negative imaginary part. If we choose m A to be around 350-400 GeV, the SUSY enhancement with R H ∼ 10 is possible form t1 ≃ 110 GeV. However, due to the behaviour of RGE for A t , the large initial values of A 0 > 0 lead to a relatively small positive A t at the SUSY scale. In order to obtain the light stop needed for large R H via the see-saw like mechanism, the overall stop mass scale should not be very big. Unfortunately, this latter fact prevents us from finding a suitable region in the parameter space with A 0 > 0, since it turns out that for a setup like this the lightest Higgs boson mass is around 100 GeV, which is excluded experimentally (we use HiggsBounds 2.0 38 package for confronting our predictions with the LEP bound). In contrast, for the A 0 < 0 scenario we have m h0 ≃ 118 GeV. Discussion The search for the Higgs boson seems to be the main goal for the LHC today though the appearance of the new physics would be the major breakthrough. One can see that even if the "new physics" is represented by the enlargement of the Higgs sector, the cross-section of the Higgs production can be essentially enhanced due to the large value of tan β = v 2 /v 1 . This enhancement might even lead to preferable observation of a heavy Higgs boson rather than the light one. At the same time, if SUSY or some other heavy particles exist, the enhancement of the Higgs production can be pushed even further. This latter enhancement, however, is valid only for the restricted set of parameters subjected to two requirements: one of the intermediate particles (the lightest top squarkt 1 in our case) has to be relatively light and has to be close to the resonance with the Higgs boson. The allowed region in the parameter space found here seems to be very narrow mostly due to the relic density constraint. However, this impression is not true since in each plane shown in Fig. 4 all the other parameters are fixed. In the whole parameter space the allowed volume with σ q+q 1 pb and R H ≃ 3 − 5 is obviously bigger. For example, the benchmark point parameters can be shifted to tan β = 25 and µ = 210 GeV at the price of slightly lower values of R H ∼ 3 and σ q+q ∼ 0.5 pb. Our main goal was to study the influence of squarks on the heavy Higgs boson production and to find the regions of the MSSM parameter space, for which the cross-section via the gluon fusion process can be essentially increased. However, in the considered scenarios compatible with known experimental constraints it is still lower than the associated production accompanied by two b-quarks 1 (see diagrams shown in Fig. 
5). Indeed, with the help of the CalcHEP package 39 the total cross-section for the pp → bbH process is estimated to be around 7 pb at √ s = 14 TeV for our benchmark point (tan β = 30 and M A = 340 GeV). This is an order of magnitude larger than the gluon-fusion cross-section evaluated above. Hence, it is very hard to "see" gluon fusion on top of the bbH process. It should be pointed out, however, that there are no virtual superpartners in the diagrams in Fig. 5, so the same cross-section is expected within any Two-Higgs-Doublet Model (THDM) with large tan β. As a consequence, a complementary search is required to discriminate between different THDM possibilities. It is worth mentioning the other phenomenological implications of the chosen benchmark point with A 0 < 0. In the considered case the lightest top squark is almost degenerate with the top quark and its dominant decay channel is t 1 → χ + 1 b (we use the SUSYHIT code 41 to calculate the branching fractions). This mode was not so extensively analyzed at the Tevatron, and the current bounds for stop production at √ s = 1.96 TeV are far above the theoretically predicted values 42 . However, at the LHC stops can be produced abundantly. For example, for our benchmark point the stop pair-production cross-section at √ s = 14 TeV, obtained with the help of the CalcHEP package 39 , is around 55 pb (in comparison with approximately 8 pb for √ s = 7 TeV). The lightest chargino χ + 1 produced in the stop decay has a mass slightly below the neutralino-W-boson threshold (m χ+ ≃ 170 GeV ≲ m χ0 + m W ), so it decays into the lightest neutralino and a fermion-antifermion pair coming from the virtual W-boson. It turns out that the chargino decays into light quarks with 66 % probability; in 33 % of cases it produces leptons. As a consequence, we have the following key signature for stop pair production: two b-jets coming from the decay of the stops, missing energy E T from the two neutralinos, and light-quark jets or leptons from the virtual W-bosons (see Fig. 6; the blob in that figure corresponds to all the tree-level diagrams contributing to stop production, and the final states include two b-jets, missing energy E T , light-quark jets, and leptons). It is obvious that for the considered value of the stop mass the final states are similar to those of top pair production, so one can search for the t 1 t 1 signal in the tt event sample, as it was done in Ref. 42. With almost equal probability (45 %) the pair of virtual W-bosons produces either four jets or two jets accompanied by a charged lepton and additional missing energy from the neutrino. In 10 % of cases both W-bosons decay leptonically, and instead of the light-quark jets we have two charged leptons and additional E T from two neutrinos. The ATLAS collaboration has already performed a study of such signatures 43 at √ s = 7 TeV with real data obtained in 2010 (the so-called one-lepton analysis with b-jets and missing transverse energy). Their results can be interpreted as exclusion limits in the (m g , m t1 ) plane (m g being the gluino mass) and, according to Fig. 3 of Ref. 43, the stop production cross-section should be smaller than 15-40 pb for m t1 ≃ 180 GeV, depending on the gluino mass, which varies in the range 350-620 GeV. Since for our benchmark point the cross-section of stop pair production with the given final states is approximately σ t1t1 × BR(t 1 t 1 → b q q ′ b l ν) = 8 × 0.45 = 3.6 pb and m g ≃ 630 GeV, it seems that we escape the current ATLAS bound.
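As a quick cross-check of the topology fractions used above, one can combine the stated 66 % / 33 % hadronic / leptonic fractions of each (virtual) W in the two chargino decays; up to rounding this reproduces the 45 %, 10 % and ≈ 3.6 pb figures quoted in the text.

had, lep = 0.66, 0.33
print("4 jets:", round(had * had, 2), "  2 jets + 1 lepton:", round(2 * had * lep, 2),
      "  2 leptons:", round(lep * lep, 2))
sigma_pair_7TeV = 8.0                                  # pb, from the text
print("sigma x BR (1-lepton topology) ~", round(sigma_pair_7TeV * 2 * had * lep, 1), "pb")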
However, the searches for light stop production seem to be very challenging, and we draw attention to this decay mode. Another interesting point is that the B s → µ + µ − branching fraction almost touches the experimentally allowed boundary line, so this rare decay may well be observed at the LHCb experiment in the near future. Thus, our main conclusion is that there exist regions of parameter space where the cross-section of single heavy Higgs production is large enough to favour its observation at the LHC even with intermediate luminosity. In addition, the search for lightest-stop production in the t 1 → χ + 1 b mode seems to be within the reach of the LHC at an early stage. Whether we are lucky or not will become clear only a posteriori. However, any favourable possibility should not be missed.
2011-10-14T11:32:38.000Z
2011-06-22T00:00:00.000
{ "year": 2011, "sha1": "d790058e46a3299b8cbcfb369886bb0e0cd9220b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1106.4385", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d790058e46a3299b8cbcfb369886bb0e0cd9220b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
17802139
pes2o/s2orc
v3-fos-license
Evaluation of performance metrics of leagile supply chain through fuzzy MCDM Article history: Received October 2, 2012 Received in Revised Format March 12, 2013 Accepted March 14, 2013 Available online March 15 2013 Leagile supply chain management has emerged as a proactive approach for improving business value of companies. The companies that face volatile and unpredictable market demand of their products must pioneer in leagile supply chain strategy for competition and various demands of customers. There are literally many approaches for performance metrics of supply chain in general, yet little investigation has identified the reliability and validity of such approaches particularly in leagile supply chains. This study examines the consistency approaches by confirmatory factor analysis that determines the adoption of performance dimensions. The prioritization of performance enablers under these dimensions of leagile supply chain in small and medium enterprises are determined through fuzzy logarithmic least square method (LLSM). The study developed a generic hierarchy model for decision-makers who can prioritize the supply chain metrics under performance dimensions of leagile supply chain. © 2013 Growing Science Ltd. All rights reserved. Introduction Supply Chain Management has been defined by "The Council Of Logistics Management" (2000) as "the systematic, strategic coordination of the traditional business functions and tactics across these businesses functions within a particular organization and across businesses within the supply chain, for the purpose of improving the long term performance of the individual organizations and the supply chain as a whole" (Li et al., 2005).Supply chain can be considered as a set of activities that are used by any firm to provide value for its customer either as a product or service, or a combination of both (Samaranayake, 2005).Leagile supply chain is defined as the combination of lean and agile paradigms that, applied to the strategy of supply chain, respond satisfactorily, to the volatile market demands (Van Hoek et al., 2001).There is an important difference between the performance of lean supply chain and agile supply chain.Generally, lean supply chains (or efficient) are appropriate for functional stable products and services, while the agile supply chain (or responsive) are better suited for products and services that are innovative and less predictable (Slack et al., 2008).Leagile supply chain has not been considered as a strategic concept, it can be thought of as a support for the cumulative model of lean and agile practices, because the leagility allude to some degree of the overlap between leanness and agility (Narasimhan, et al. 2006).To achieve leagility the de-coupling point is be located at the final assembler.An action that usually requires is product rationalization (Hau & Margurita, 1995).Specific products are now pulled by current sales demand whilst upstream of the de-coupling point suppliers now work to level schedules.Hsu and Hu (2008) examined the consistency approaches by factor analysis, which determines the adoption and implementation of green supply chain management in Taiwanese electronic industry.The fuzzy analytic hierarchy process method is applied to prioritize the relative importance of four dimensions and twenty approaches among nine enterprises in electronic industry.Motadel et al. 
(2011) identified and prioritized five supply chain agility indicators in the automotive industry of Tehran.The results proved that among the five dimensions of supply chain agility, Information Technology and Flexibility are the most important indicators.Agarwal et al. (2006) proposed a framework, which encapsulated the market sensitiveness, process integration information driver and flexibility measures of supply chain performance.The proposed framework analyzed the effect of market winning criteria and market qualifying criteria on the three types of supply chains: lean, agile and leagile.Gunasekaran et al. (2004) developed a framework to promote a better understanding of the importance of SCM performance measurement and metrics.Arawati et al. (2008) analyzed the relationships between strategic supplier partnership practices, product quality performance and business performance and their associations through correlation, cluster analysis and Structural Equation Modeling (SEM).Bhatnagar and Sohal (2005) identified the manufacturing industry of Asian as the research targets and proposed supply chain performance measurement indicators on plant location factor, supply chain uncertainty and manufacturing practices to measure supply chain competitive advantages.Özkir and Demirel (2011) explored some strategies for design and performance measurement of different supply chain types based on fuzzy entropy approach.Sower and Abshire (2011) examined the impact of RFID technology utilization on organizational agility in manufacturing firms.The results showed that the implementation of RFID technology could result in improved organizational agility resulting in improved performance. From the current literature, it is observed that performance measurement and metrics pertaining to leagile supply chains have not received adequate attention from researchers or practitioners.Hence, in this paper, a generic hierarchical model for leagile supply chain performance measurement is developed through CFA.Further, weights of the enablers under each performance determinant are determined in fuzzy environment. Organizational Performance (ORP) The definition of organizational performance depends on the views of different stakeholders.According to Vickery et al. (2007), organizational performance refers to how well an organization achieves its market-oriented goals as well as its financial goals.Thus, they set up the measurement performance items as return on assets, market share and growth rate.This study followed the indicators adopted by Barua et al. (2004) and Li et al. (2006) as the base for designing the questionnaire evaluating organizational performance, including market share, sale growth, ROI and green image.The items of organizational performance (Chen et al., 2006) measurement was based on the related executives' evaluation and judgment with regard to the market share , sale growth and profit margin on sales of the company (comparing with the last year).The item scales are five-point Likert scales with 1 = significant decrease, 2 = decrease, 3=same as before, 4=increase, 5=significant increase. 
Operational Performance (OP) Lippman (2001) interviewed operations managers and reported that most of them claimed they were experiencing increases in their operational outcomes, such as reduction in cycle time, cost reduction and quality improvement. In this study, product cycle time, due date performance, cost and quality are considered as performance enablers under operational performance. These enablers influence the firm's competitiveness in the market. Questions were based on a five-point Likert scale for evaluating the managers' perception of each performance enabler. Customer Service Performance (CSP) Customer service performance is the ability to respond to customers' ever-changing wants and needs in a timely way (Zelbst et al., 2010). The utilization of technologies such as RFID can lead to agility in organizations. Agile organizations have the capability to respond to unexpected changes and increase processing speed, thus increasing customer service performance. The integration of information technology is likely to result in more agility for an organization, resulting in better response to market changes as well as an enhanced capability to sense, perceive and anticipate market changes. In this study, customer satisfaction, delivery dependability, responsiveness and order fill capacity are considered as performance enablers under customer service performance. Questions were based on a five-point Likert scale for evaluating the managers' perception of each performance enabler. Flexibility (FL) According to Slack (1987), flexibility is two dimensional; Swafford et al. (2006) defined flexibility using two dimensions called range and adaptability. Range is defined as the number of different positions, or flexible options, achieved with existing resources. Adaptability is the ability to change the existing number of states. In this study, product development flexibility, sourcing flexibility, manufacturing flexibility, and information technology flexibility are considered as performance enablers under flexibility. Questions were based on a five-point Likert scale for evaluating the managers' perception of each performance enabler. The conceptual model The proposed model is based on four performance indicators: (i) Operational Performance (OP); (ii) Customer Service Performance (CSP); (iii) Organizational Performance (ORP); and (iv) Flexibility (FL). In this study, in order to determine the domain that encompasses leagile performance dimensions, exhaustive theoretical, empirical and practitioner literature was reviewed. A conceptual framework was developed by incorporating ideas, theories and studies from the literature. The conceptual framework is shown in Fig. 1. Hypotheses Research question: How will the above performance enablers influence the leagility of a supply chain? In this context, the following hypotheses are introduced. H1: Enablers of operational performance (OP) constitute an indicator of leagile supply chain performance. H2: Enablers of customer service performance (CSP) constitute an indicator of leagile supply chain performance. H3: Enablers of organizational performance (ORP) constitute an indicator of leagile supply chain performance. H4: Enablers of flexibility (FL) constitute an indicator of leagile supply chain performance. This study examines the consistency of these approaches by confirmatory factor analysis, which determines the construct validity, convergent validity and internal consistency of the performance enablers of the leagile supply chain. Further, the weights of the performance enablers under each performance indicator are determined by using the Fuzzy Analytic Hierarchy Process (FAHP).
Confirmatory factor analysis CFA requires the specification of a factor model, including the number of factors and the pattern of zero and nonzero loadings on those factors. A small number of theory-driven competing models might be specified as well. CFA provides information on how well the hypothesized model explains the relations among the variables, and has the advantage of allowing hypothesis testing on the data. The confirmatory factor analysis was done using LISREL 8.5. The fit of the measurement model to the data was checked with the chi-square goodness-of-fit test and approximate fit indexes. An insignificant model chi-square goodness-of-fit (set at 0.05) signifies model fit. For approximate fit indexes, values of the Goodness of Fit Index (GFI), Adjusted Goodness of Fit Index (AGFI), Normed Fit Index (NFI), Relative Fit Index (RFI), Incremental Fit Index (IFI), Tucker-Lewis Index (TLI) and Comparative Fit Index (CFI) above 0.9 would indicate model fit. For the remaining approximate fit indexes, a root mean square error of approximation (RMSEA) value of less than 0.08 and a root mean squared residual (RMR) value of less than 0.05 would signify reasonable model fit. Significance of the standardized regression weight (standardized factor loading) estimates signifies that the indicator variables are significant and representative of their latent variable. Fuzzy analytical hierarchy process (FAHP) In the Analytic Hierarchy Process, vagueness in the decision-maker's subjective judgments is not incorporated in determining the relative weights of the criteria. In order to eliminate this limitation, a fuzzy modification of the AHP is necessary for tackling this uncertainty and imprecision. Determination of priorities from a fuzzy pairwise comparison matrix The assessment of local priorities based on pairwise comparisons needs some prioritization method to be applied. However, the standard AHP eigenvalue prioritization approach cannot be used when the decision-maker faces a complex and uncertain problem and expresses his/her comparison judgments as uncertain ratios, such as 'about two times more important', 'between two and four times less important', etc. A natural way to cope with such uncertain judgments is to express the comparison ratios as fuzzy sets or fuzzy numbers, which incorporate the vagueness of human thinking. When comparing any two elements at the same level of the decision hierarchy, the uncertain comparison judgment can be represented by a triangular fuzzy number a_ij = (l_ij, m_ij, u_ij), where l_ij, m_ij and u_ij are measures between 1 and 9 corresponding to the lower bound, the modal (mean) value and the upper bound of the triangular membership function, respectively. In this paper, triangular fuzzy numbers, which are a special class of L-R fuzzy sets, are adopted. The fuzzy membership functions are defined as: very low (1, 1, 3); low (1, 3, 5); medium (3, 5, 7); high (5, 7, 9); very high (7, 9, 9). The normalized triangular fuzzy weight vector of the comparison matrix A can be expressed as w = (w_1, w_2, ..., w_n) = ((w_1l, w_1m, w_1u), (w_2l, w_2m, w_2u), ..., (w_nl, w_nm, w_nu)), where each w_i is a normalized triangular fuzzy weight obtained with the fuzzy logarithmic least squares method (LLSM) developed by Wang et al. (2006). The triangular fuzzy weight w_i = (w_il, w_im, w_iu) can then be defuzzified to obtain the crisp relative importance weight. Main stages of FAHP The FAHP divides the decision problem into the following main steps (Mikhailov et al., 2003).
Problem structuring The FAHP decision problem is structured hierarchically at different levels with each level consisting of a finite number of decision elements.The top level of the hierarchy represents the overall goal, while the lowest level is composed of all possible alternatives.One or more intermediate levels embody the decision criteria and sub-criteria. Assessment of Local priorities The relative importance (weights) of the decision elements (criteria and alternatives) is assessed indirectly from comparison judgments during the second step of the decision process.The decisionmaker is required to provide his/her preferences by comparing all criteria, sub-criteria and alternatives with respect to upper level decision elements.The values of the weights and scores are elicited from these comparisons . Calculation of global priorities Overall weight vector of the sub-criteria at the level prior to the final level is calculated by successively multiplying the priorities from previous level to subsequent levels. Survey Questionnaire Survey questionnaire is developed from an extensive literature review, which examined a number of streams of research, including lean and agile supply chains, supply chain strategies, design requirements for various supply chains, confirmatory factory analysis.Twenty questions on the performance Indicators such as Operational Performance (OP); Customer Service Performance (CSP), Organizational Performance (ORP) and Flexibility (FL) are developed.The survey was sent to the medium and small organizations of Andhra Pradesh.The survey was addressed to personnel involving purchasing, production, marketing& sales, logistic providers with mailing and personal contacts.A total of 225 out of 300 usable surveys were received.Another 20 surveys were returned and were not applicable because the respondent was no longer with the company.This resulted in an effective response rate of 75 percent. 
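The local-priority step described above can be illustrated with a small numerical sketch. The fragment below is not the authors' implementation: the paper derives the fuzzy weights by solving a constrained non-linear LLSM programme in LINGO, whereas this sketch uses the simpler row geometric-mean approximation of the logarithmic least squares solution and an ordinary centroid defuzzification. The linguistic scale (very low to very high) is taken from the text; everything else is illustrative.

```python
import numpy as np

# Linguistic scale from the paper: triangular fuzzy numbers (l, m, u) on 1-9.
SCALE = {
    "very_low": (1, 1, 3), "low": (1, 3, 5), "medium": (3, 5, 7),
    "high": (5, 7, 9), "very_high": (7, 9, 9),
}

def reciprocal(tfn):
    """Reciprocal of a triangular fuzzy number (l, m, u) -> (1/u, 1/m, 1/l)."""
    l, m, u = tfn
    return (1.0 / u, 1.0 / m, 1.0 / l)

def fuzzy_geometric_mean_weights(matrix):
    """Approximate fuzzy priorities of an n x n triangular fuzzy comparison
    matrix via row-wise geometric means (the unconstrained LLSM solution)."""
    n = len(matrix)
    # Row geometric means, computed bound by bound (l, m, u).
    gm = np.array([[np.prod([row[j][k] for j in range(n)]) ** (1.0 / n)
                    for k in range(3)] for row in matrix])
    total = gm.sum(axis=0)
    # Normalise so the fuzzy weights stay ordered: l / sum(u), m / sum(m), u / sum(l).
    return [(gm[i, 0] / total[2], gm[i, 1] / total[1], gm[i, 2] / total[0])
            for i in range(n)]

def defuzzify(tfn):
    """Centroid defuzzification of (l, m, u); a stand-in for the paper's rule."""
    return sum(tfn) / 3.0

# Example: a 2 x 2 comparison where criterion A is judged 'medium' over B.
a_over_b = SCALE["medium"]
matrix = [[(1, 1, 1), a_over_b],
          [reciprocal(a_over_b), (1, 1, 1)]]
fuzzy_w = fuzzy_geometric_mean_weights(matrix)
crisp_w = [defuzzify(w) for w in fuzzy_w]
crisp_w = [w / sum(crisp_w) for w in crisp_w]  # renormalise the crisp weights
print(crisp_w)
```

For the full hierarchy, the same routine would be run once for the four performance indicators and once for the enablers within each indicator.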
Descriptive statistics A summary of the demographic characteristics of the sample is presented in Table 1.We have received 225 responses from three types of medium and small scale industries, namely (i) apparel manufacturing (ii) automotive spare parts and (iii) electronic components indicates that their interest in leagile supply chains.Responses indicate that people from important business are involved.Customer types namely Retailer, Bulk Manufacturer, Distributor and Customer direct are involved in the study.Approximately 75% had more than three years of working experience.This highlights the importance of working experience in the implementation of leagile supply chain management systems.The study tested the measurement properties of the constructs (performance indicators) by confirmatory factor analysis.CFA was used to evaluate how well the measurement items for reflect latent variables in the hypothesized structure, due to the fact that this study is based on the theoretical basis from the previous research.Average Variance Extracted of each latent variable was more than 0.7 which showed that latent variables had reliability and convergence validity.The data of Average Variance Extracted (AVE) of Squared Multiple Correlation (SMC), Construct Reliability (CR) and latent variables are presented in Table 2.The standardized factor loadings (>0.6) of the items indicate all the performance enablers are significantly related to their latent variables.In addition, Average Variance Extracted (AVE) of each the latent variable is greater than the cutoff point (0.50) indicates the convergent validity.Composite Reliability (CR) and Average Variance Extracted (AVE) was more than 0.6 and 0.5 respectively indicating good construct reliability and adequate convergent validity.These findings suggested that the16 items of four latent variables were reliable and had a high level of internal consistency.The latent variables (Operational performance, Customer service performance, Organizational performance and Flexibility) were evaluated based on the statistical significance of the indicator loadings with their reliability and variance extracted.Each variable's tvalues associated with each of the loadings exceed the critical values for the .05significance level, thus showing that all variables are significantly related to their specified latent variables.Basing on the factor loadings, it can be concluded that there exists significant relationship between performance enablers and the respective performance indicators in respect of leagile supply chain.The fit indices of the structure model of confirmatory factor analysis are shown in table 3. The value of χ2/d.f is 5.3 indicates the close fit of the model (Carter and Wu, 2010).As to the propriety of model, GFI value was 0.77, AGFI was 0.68, CFI was 0.98 indicates the highly close fit.Therefore, there were enough evidences to accept all the propositions (H1, H2, H3 and H4) were supported.It is an established fact that root mean square error of approximation (RMSEA) and standardized root mean square residual (SRMR) are also measures for model fitness.SRMR values less than 0.08 and RMSEA values less than 0.06 imply very good models (Brown, 2006;Hu and Bentler, 1999).The values of RMSEA (0.14) and SRMR (0.05) obtained in the study indicates the satisfactory fitness of the model.Therefore, generally speaking, the measurement model of this Leagile Supply chain suggesting a reasonably acceptable fit to the data. 
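The reliability and convergent-validity figures quoted above (composite reliability above 0.6, AVE above 0.5) follow directly from the standardized loadings. The sketch below uses the standard formulas with made-up loadings, not values from the paper's Table 2.

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each indicator's error variance is taken as 1 - loading^2."""
    s = sum(loadings)
    errors = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + errors)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Illustrative (not from the paper) standardized loadings of a four-item factor.
loadings = [0.72, 0.68, 0.81, 0.77]
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
print(f"CR = {cr:.3f} (cut-off 0.6), AVE = {ave:.3f} (cut-off 0.5)")
```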
Prioritization of Performance Enablers The FAHP decision problem is structured hierarchically at different levels with each level consisting of a finite number of decision elements.The top level of the hierarchy represents the overall goal, while the lowest level is composed of all possible alternatives.One or more intermediate levels embody the decision criteria and sub-criteria.The hierarchy to analyze the performance of Leagile supply chain is shown below.The goal of the decision hierarchy is to analyze the performance of Leagile supply chain.Level 1 indicates the leagile supply chain performance indicators.Level 2 indicates the performance enablers. Local priorities The relative importance (weights) of the decision elements (criteria and alternatives) is assessed indirectly from fuzzy pair-wise comparison judgments.Fuzzy prioritization problem is developed using fuzzy pair-wise comparison matrices of performance indicators and performance enablers shown in the following Tables. Table 4 Fuzzy pair-wise comparison of performance indicators Performance Indicators Customer service performance Flexibility Operational performance Organizational performance Customer service performance (1,1,1) Fuzzy pair wise comparison matrices shown in Tables (4.1, 4.2….4.5) are used to determine the priority of the performance indicators and enablers by solving non-linear programming as discussed in section 3.3 using LINGO solver.Overall weight of the enablers at the level prior to the final level is calculated by successively multiplying the priorities from previous level to subsequent levels. Calculation of global priorities The global priorities of Performance enablers of Leagile supply chain are shown in Table 5.In order to determine the importance of the performance indicators and enablers, the judgments collected from respondents to prepare the fuzzy pair wise comparison matrices.From these matrices fuzzy logarithmic least square method is adopted to determine the priority of Performance determinants and enablers. Conclusion This study suggested that the four-factor model with 16 items of the performance measurement of a leagile supply chain had a good fit.It is a valid and reliability measurement to identify the importance performance enablers under each performance indicator.Further, the relative weights of the enablers are determined in fuzzy environment.Responsiveness is emerged as the most important enabler.Product development flexibility, customer satisfaction and sourcing flexibility may be considered as equally important enablers.The enablers namely, order filling capacity, delivery dependability, product cycle time and market share are moderate important.The remaining enablers (Quality, cost, manufacturing flexibility, IT flexibility, return on investment, green image and sales growth) are relatively less important.The priority approaches for measuring the performance of leagile supply chain show the respondents' perceptions about the importance of them and assisted organizations recognize their strengths to move towards continuous improvement.Further, priority approach in fuzzy environment takes care of uncertainties in the subjective opinions of the stake holders. 
Although the previous literature has contributed to recognizing various approaches for measuring leagile supply chain performance, little is known about confirmatory and priority approaches, particularly in small and medium enterprises. The main strengths of this paper are hence two-fold: it establishes the consistency of the measurement approach and it provides a method for prioritizing the performance enablers. This study proposed the use of FAHP to prioritize the performance enablers of the leagile supply chain. In addition, the model can help managers improve their understanding of performance measurement of leagile supply chains and enables decision-makers to assess the performance of leagile supply chains. Furthermore, for determining the weights of the various performance enablers of leagile supply chain practice, the application of the analytic network process (ANP) is suggested as future work in order to capture the interdependency among enablers. Fig. 1. The conceptual framework of leagile supply chain performance enablers. Table 2. Reliability and validity analytical results of the measurement model. Table 3. Fit indices of the structural model. Considering the global weights in Table 5, the sixteen enablers for measuring the performance of a leagile supply chain can be prioritized. Based on the global weights, the enablers are categorized into three groups. Responsiveness (0.1835), product development flexibility (0.1494), customer satisfaction (0.1439) and sourcing flexibility (0.1047) emerge as the most important enablers. Order filling capacity (0.0604), delivery dependability (0.0564), product cycle time (0.0531) and market share (0.0513) are of moderate importance. The remaining enablers, namely quality (0.0412), manufacturing flexibility (0.0344), IT flexibility (0.0344), cost (0.0203), return on investment (0.0289), green image (0.0164), due date delivery performance (0.0137) and sales growth (0.0121), are of less importance in measuring the performance of a leagile supply chain.
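As a small illustration of how the global weights in Table 5 arise from the two-level hierarchy, the sketch below multiplies level-1 indicator weights by local enabler weights. The numbers are placeholders for illustration only, not the study's results.

```python
# Hypothetical local weights: each performance indicator carries a weight at
# level 1, each enabler a local weight within its indicator; the global weight
# of an enabler is the product of the two.
local_weights = {
    "Flexibility": (0.30, {"Product development flexibility": 0.45,
                           "Sourcing flexibility": 0.30,
                           "Manufacturing flexibility": 0.15,
                           "IT flexibility": 0.10}),
    "Customer service performance": (0.35, {"Responsiveness": 0.50,
                                            "Customer satisfaction": 0.30,
                                            "Delivery dependability": 0.12,
                                            "Order fill capacity": 0.08}),
}

global_weights = {
    enabler: round(ind_weight * local, 4)
    for ind_weight, enablers in local_weights.values()
    for enabler, local in enablers.items()
}

for enabler, w in sorted(global_weights.items(), key=lambda kv: -kv[1]):
    print(f"{enabler}: {w}")
```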
2016-01-25T19:18:26.375Z
2013-07-01T00:00:00.000
{ "year": 2013, "sha1": "37c463c11be65fc2d941d46310b9c4f06e151f70", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5267/j.dsl.2013.03.003", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "37c463c11be65fc2d941d46310b9c4f06e151f70", "s2fieldsofstudy": [ "Business", "Engineering" ], "extfieldsofstudy": [ "Business" ] }
261991639
pes2o/s2orc
v3-fos-license
Milk yield variation in first three lactations and factors affecting milk yield in Sahiwal cattle The genetic evaluation based on life time production is gaining importance. Longer productive life not only reduces rearing costs per year of productive life but also allows exploitation of maximum genetic potential of the cow (Kathiravan et al. 2010). Test-day models have recently gained considerable interest. Sire evaluation using a test day model has higher accuracy due to larger number of measurements per daughter than one lactation record in 305day milk yield model (Gupta 2013) and also accounts for short-term environmental factors specific to individual yields. In the current scenario there is need to study the milk yield of different lactation for determining lifetime performance and also to shift from 305-day milk yield models to test day milk yield models for early and accurate genetic evaluation. Hence, present study was done to evaluate the milk yield of Sahiwal cattle in different lactations and factors affecting 305-day milk yield and test day milk yields. The data of first to third lactation 305-day milk yield (305-DMY) and monthly test day milk yields of Sahiwal cattle spread over a period of 52 years (1961–2012) maintained at ICAR-National Dairy Research Institute (NDRI) were used for the present study. The monthly test day records were taken on 6th (TD1), 36th (TD2), 66th (TD3), 96th (TD4), 126th (TD5), 156th (TD6) 186th (TD7), 216th (TD8), 246th (TD9) and 276th day of lactation (TD10). The data were classified according to the season, year of calving, age groups and service period. Seasons namely, winter (December-March), summer (April -June), rainy (July September) and autumn (October-November) were grouped. The year of calving effect was considered and year with no observation and less than 3 records were excluded from present study. Age at calving was grouped into 11 classes (≤ 900 days as first and ≥ 1441 days as last class) The genetic evaluation based on life time production is gaining importance.Longer productive life not only reduces rearing costs per year of productive life but also allows exploitation of maximum genetic potential of the cow (Kathiravan et al. 2010).Test-day models have recently gained considerable interest.Sire evaluation using a test day model has higher accuracy due to larger number of measurements per daughter than one lactation record in 305day milk yield model (Gupta 2013) and also accounts for short-term environmental factors specific to individual yields.In the current scenario there is need to study the milk yield of different lactation for determining lifetime performance and also to shift from 305-day milk yield models to test day milk yield models for early and accurate genetic evaluation.Hence, present study was done to evaluate the milk yield of Sahiwal cattle in different lactations and factors affecting 305-day milk yield and test day milk yields. 
The data were analyzed with the least squares fixed-effects model Y_ijklm = μ + S_i + P_j + A_k + D_l + e_ijklm, where Y_ijklm is the 305-day milk yield/test day milk yield of the m-th individual in the i-th season, j-th year, k-th age group class and l-th service period class; μ is the overall population mean; S_i is the fixed effect of the i-th season of calving/season of the test day recording month; P_j is the fixed effect of the j-th year of calving; A_k is the fixed effect of the k-th age at calving class; D_l is the fixed effect of the l-th service period class; and e_ijklm is the random error, NID (0, σ²_e). Lactation 305-day or less milk yield: The least squares mean of first lactation 305-day milk yield (FL305DMY) was similar to the milk yield reported by Singh et al. (2005) but higher than the values reported by Debbarma et al. (2010), Dongre (2012) and Gupta (2013) in Sahiwal cattle. The second lactation 305-day milk yield (SL305DMY) was higher than the milk yields reported by Rehman and Khan (2012) and Gupta (2013) in Sahiwal cattle. The third lactation 305-day milk yield (TL305DMY) was higher than those reported by Rehman and Khan (2012) and Gupta (2013) in Sahiwal cattle. The differences in the estimates of 305-DMY reported by researchers could be due to sampling variations, herd-to-herd differences, or differences over time depending on the period to which the data pertained and the effects considered in the model for least squares analysis. Effect of non-genetic factors on 305-day milk yield Effect of season of calving: The season of calving had a highly significant effect (P<0.01) on SL305DMY and a non-significant effect on FL305DMY and TL305DMY. Cows that calved in the autumn season had the maximum FL305DMY. On the contrary, cows that calved during summer had the lowest FL305DMY. The non-significant effect of season of calving on FL305DMY was also reported by many workers such as Debbarma et al. (2010), Dongre (2012), Mundhe (2012) and Gupta (2013). Contrary to the present study, a significant effect of season of calving on FL305DMY was documented by Rehman et al. (2008) and Debbarma et al. (2010); the lower production in the rainy season may be attributed to the hot and humid climatic conditions and the availability of less green fodder during the summer season. Effect of year of calving: The year of calving significantly affected (P≤0.01) first, second and third lactation 305-day milk yield. Similar to the present findings, Singh et al. (2005), Debbarma et al. (2010), Dongre (2012) and Gupta (2013) also found a significant effect of period/year of calving on FL305DMY. However, Mundhe (2012) reported that the effect of period on FL305DMY was not significant. The present finding for SL305DMY and TL305DMY was in accordance with the reports of Rehman and Khan (2012) and Gupta (2013) in Sahiwal cattle. The differences in 305-day milk yield over the years may be attributed to differential culling levels on the basis of production and differences in feeding and management practices, besides the changing genetic structure of the population. Effect of age group: The age at calving had a non-significant effect on the first, second and third lactation 305-day milk yield. This finding was similar to the results reported by Debbarma et al. (2010), Dongre (2012) and Gupta (2013) for FL305DMY in Sahiwal cattle. Gupta (2013) also found a non-significant effect of age group on SL305DMY and TL305DMY in Sahiwal cattle. The non-significant effect of age groups may be attributed to the uniform management of animals during different parts of their life and adequate compensation for various growth stages through management and nutritional supplementation. Effect of service period: Service period had a highly significant (P≤0.01) effect on FL305DMY and TL305DMY and a significant (P≤0.05) effect on SL305DMY. The maximum FL305DMY was obtained for animals having a service period between 361 and 390 days, and the yield was minimum for a service period of less than 60 days. The significant effect of service period on FL305DMY was also reported by Rehman et al.
(2008) and Dongre (2012) in Sahiwal cattle.The animals with 241-270 days service period has highest SL305DMY and with 61-90 days service period has lowest SL305DMY.In general animals with service period of 120 days or less have lower TL305DMY and animals with service period greater than 240 days has higher TL305DMY.It can be inferred that the animals having shorter service period had lesser 305DMY.It may be due to the fact that early conceivers become dry earlier to facilitate the subsequent calving and the late conceivers may be in milk for a comparatively longer period. Monthly test day milk yields: The least squares means of first lactation monthly test day milk yields (FLTDMY) varied from 5.24±0.15kg to 8.31±0.18kg (Table 1), which was similar to the test day milk yield reported by Debbarma et al.(2010) and Gupta (2013) in Sahiwal cattle.The least squares means of second lactation monthly test day milk yields (SLTDMY) ranged from 5.29±0.16kg to 10.49±0.22 kg and from 5.48±0.19kg to 11.58±0.25 kg (Table 1) for third lactation monthly test day milk yields (TLTDMY).Gupta (2013) reported similar yield for SLTDMY and TLTDMY in Sahiwal cattle. Effect of non-genetic factors on test day milk yield Effect of season of test day recording month: The season of test day recording month has significant effect on all test days except TD1, TD8 and TD10 of first lactation, TD1, TD5,TD6,TD7,TD8 and TD9 of second lactation and TD1, TD5, TD8 and TD10 of third lactation.Debbarma et al.(2010) and Gupta (2013) also reported significant effect of season of calving on majority of the FLTDMY in Sahiwal cattle.Gupta (2013) found statistically highly significant (P<0.01)effect of season on second lactation TD2, TD3 and TD4 and non-significant effect on rest of the SLTDMY and significant effect of season of calving on some TD and non-significant effect of season on some TLTDMY in Sahiwal cattle.In general test day milk yields recorded during winter and summer were comparatively higher than test day milk yields recorded during rainy season and autumn for all lactation. Effect of year of calving: The effect of year of calving was highly significant (P≤0.01) on all FLTDMY.The present findings were similar to the earlier reports of Debbarma et al. (2010) and Gupta (2013) in Sahiwal cattle.The effect of year of calving was significant on all SLTDMY except TD2, TD3, TD4 and TD5 and on all TLTDMY except TD5, TD6 and TD10. Effect of age group: The age at first calving had highly significant (P≤0.01)effect on TD10, significant (P≤0.05)effect on TD2 and TD6 and non-significant effect on rest of the FLTDMY.Debbarma et al. (2010) reported significant (P≤0.05)effect of age at first calving on TD2, TD6 TD10 and Gupta (2013) found significant effect of age at first calving on TD2, TD3 and TD10 Sahiwal cattle.The age at second calving had significant (P≤0.05)effect on TD3 and TD7 and non-significant effect on rest of the SLTDMY.The age at third calving had non-significant effect on all third lactation test days.However, Gupta (2013) found highly significant (P<0.01)effect on TD10, significant (P<0.05)effect on TD9 and non-significant effect on rest of the TLTDMY.There was no definite trend for variation in test day milk yields among different age groups in different lactation. 
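The least squares analysis of the fixed-effects model Y_ijklm = μ + S_i + P_j + A_k + D_l + e_ijklm described earlier can be sketched as below. This is not the analysis pipeline used in the study, which computes least squares means over the 1961-2012 ICAR-NDRI records; the data frame, column names and effect sizes here are synthetic stand-ins, and ordinary least squares with categorical factors is used merely to show how the significance of each non-genetic factor could be screened.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # hypothetical number of lactation records

# Synthetic records standing in for the real data set.
df = pd.DataFrame({
    "season": rng.choice(["winter", "summer", "rainy", "autumn"], size=n),
    "year": rng.choice(np.arange(1990, 2000), size=n),
    "age_group": rng.choice(np.arange(1, 12), size=n),   # 11 age-at-calving classes
    "sp_class": rng.choice(np.arange(1, 12), size=n),    # service-period classes
})
df["milk_305d"] = 1800 + 15 * df["sp_class"] + rng.normal(0, 150, size=n)

# Fixed-effects model Y = mu + season + year + age group + service period + e,
# fitted by ordinary least squares with every factor treated as categorical.
model = smf.ols("milk_305d ~ C(season) + C(year) + C(age_group) + C(sp_class)",
                data=df).fit()

# Type-II ANOVA table: one F-test per non-genetic factor.
print(sm.stats.anova_lm(model, typ=2))
```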
Effect of service period: The service period had a highly significant effect (P≤0.01) on TD9 and TD10, a significant effect (P≤0.05) on TD2 and TD8 and a non-significant effect on the rest of the FLTDMY. Dongre (2012) also found a variable effect of service period on weekly test day milk yields in Sahiwal cattle. The service period had a significant effect (P≤0.05) on TD8 only and a non-significant effect on the rest of the SLTDMY. In general, cows with a service period of less than 120 days had lower test day milk yield and those with a service period greater than 240 days had higher test day milk yield. The service period had a highly significant effect (P≤0.01) on TD10, a significant (P≤0.05) effect on TD4 and TD9 and a non-significant effect on the rest of the TLTDMY. Cows with a service period greater than 150 days had higher TLTDMY compared to cows with a service period of less than 150 days. This may be attributed to the longer lactation length in these animals. Comparison of 305-day milk yield and test day milk yield in different lactations: The 305-day milk yield as well as the test day milk yield showed an increasing trend with increase in lactation number. In comparison to the first lactation, the second lactation had 9.92% higher and the third lactation 19.04% higher 305DMY. There was a sharp decline in TD9 and TD10 yield for the second and third lactation compared to the first lactation (Table 1), indicating higher milk yield persistency for the first lactation. First parturition occurs in the process of continuous body and udder development, as a result of which primiparous cows have limited milk production capacity compared to pluriparous cows, which could be the reason for the lower yield in the first lactation. SUMMARY The records on first to third lactation test day and 305-day milk yields spread over a period of 52 years were collected to examine the milk yield variation in different lactations and the factors affecting monthly test-day milk yield and 305-day milk yield in Sahiwal cattle. The 305-day milk yield depicted a progressive increase with increasing parity. For all lactations, a significant effect of year of calving and service period and a non-significant effect of age group were seen on 305-day milk yield. For test day milk yield, the effect of the different factors did not follow any fixed pattern. From the study it can be concluded that milk yield increased as the lactation order increased, and that non-genetic factors influence the 305-day milk yield and test day milk yield in Sahiwal cattle. Indian Journal of Animal Sciences 85 (11): 1267-1269, November 2015 / Short communication. https://doi.org/10.56093/ijans.v85i11.53317 Table 1. Least squares means (kg) for test day milk yields and 305-day milk yield in Sahiwal cattle.
2023-09-17T15:12:43.465Z
2015-11-06T00:00:00.000
{ "year": 2015, "sha1": "ee3aa0d766e32df8bbc0f8b59b0ba7a5ec687818", "oa_license": "CCBYNCSA", "oa_url": "https://epubs.icar.org.in/index.php/IJAnS/article/download/53317/22538", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "10297d5761d98760b62021a934d52c38ed9975cb", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
233702874
pes2o/s2orc
v3-fos-license
A new database structure for the IHFC Global Heat Flow Database Periodic revisions of the Global Heat Flow Database (GHFD) take place under the auspices of the International Heat Flow Commission (IHFC) of the International Association of Seismology and Physics of the Earth's Interior (IASPEI). A growing number of heat-flow values, advances in scientific methods, digitization, and improvements in database technologies all warrant a revision of the structure of the GHFD that was last amended in 1976. We present a new structure for the GHFD, which will provide a basis for a reassessment and revision of the existing global heat-flow data set. The database fields within the new structure are described in detail to ensure a common understanding of the respective database entries. The new structure of the database takes advantage of today's possibilities for data management. It supports FAIR and open data principles, including interoperability with external data services, and links to DOI and IGSN numbers and other data resources (e.g., world geological map, world stratigraphic system, and International Ocean Drilling Program data). Aligned with this publication, a restructured version of the existing database is published, which provides a starting point for the upcoming collaborative process of data screening, quality control and revision. In parallel, the IHFC will work on criteria for a new quality scheme that will allow future users of the database to evaluate the quality of the collated heat-flow data based on specific criteria. Fuchs et al. – A new data-base structure for the IHFC Global Heat Flow Database. 2 International Journal of Terrestrial Heat Flow and Applied Geothermics. VOL. 4, NO. 1 (2021); P. 01-14. Introduction Studies of Earth's heat flow cover a wide range of scientific and applied aspects, including the planetary energy balance, the driving mechanism of tectonic processes, and the thermodynamic conditions within the interior. Understanding Earth's heat flow is also fundamental for studies about the evolution of hydrocarbon, mineral and geothermal resources, and for planning their exploitation. The International Heat Flow Commission (IHFC; www.ihfc-iugg.org) has been fostering the compilation of the Global Heat Flow Database (GHFD) since 1963 to provide objective, unique and unambiguous heat-flow data. Those compilations comprise heat-flow data from different acquisition methods, including the common borehole and shallow deepsea probe sensing determinations, but also measurements using other novel techniques including those conducted in mines and tunnels. Reflecting the needs and technical capabilities at the time, the IHFC has released several data publications during its lifetime, based on the contemporaneous IHFC database compilation (e.g., Lee and Uyeda, 1965;Simmons and Horai, 1968;Jessop et al., 1976;Global Heat Flow Compilation Group, 2013). Beyond the IHFC frame, Hasterok (2019) and Lucazeau (2019) published more recent heat-flow data compilations. The GHFD provided by Lee (1963), Lee and Uyeda (1965), Lee and Clark (1966), and Simmons and Horai (1968) represented the first printed compilations of heat-flow determinations. The latter reviewed more than 2000 heat-flow observations that were available at that time. 
Being restricted to printed tables, metadata for each heat-flow location were summarized in a six-digit number representing a code for the geographical region (first number), the geological setting (second number), the type of temperature measurement (third number), the type of thermal-conductivity measurement (fourth number), the type of corrections applied to the heat-flow datum (fifth number), and a quality indication (sixth number). Names of locations were limited to eight characters. Other listed data included the geographical coordinates and elevation, the determined thermal gradient and thermal conductivity values, and the calculated heat-flow density values. In addition, references were given with the last two digits of the year of publication. During the 1970s, as computer systems became more versatile and the number of heat-flow determinations increased, the IHFC initiated a modification of the previous database structure resulting in a first digital database. With the publication of Jessop et al. (1976), the heat-flow data compilation was made available, for the first time, in a 'computer-compatible format' from the World Data Centre. The principal philosophy of the database was to provide the user with all the information necessary to allow assessment of the heat-flow data quality. Therefore, Jessop et al. (1976) introduced additional database fields compared to the entries listed in the tables of Simmons and Horai (1968). However, the compilers of the database needed to extract the desired information from the original publications and condense the content for a database entry of a maximum of 80 characters for each determination (caused by the state of information technology at that time). The authors were aware of the fact that this limitation in characters hampered a complete description of each heat-flow measurement: "The compilers' aim has been to standardize the description as much as possible, and at the same time to mislead the user as little as possible." The basic structure of that database has remained in place until today and provided the foundation for the most recent compilations of global heat-flow data (Global Heat Flow Compilation Group, 2013;Lucazeau, 2019). In 2020, the IHFC initiated a fundamental revision of the GHFD. The process involved a multi-national collaborating project in order to consider the current and future needs of the database while taking advantage of state-of-the-art information technology. The goals were to create an authenticated database containing information on the type and quality heatflow data, and to fulfil the requirements of modern research data infrastructure by including detailed metadata descriptions and database interoperability. To reach these goals, self-organized working groups revised and extended the previous database structure provided in Jessop et al. (1976). Working groups, consisting of terrestrial and marine heat-flow experts from all continents, were established for four parameters that affect heat-flow calculation and interpretation: 1) heat-flow determination methods, 2) metadata and flags, 3) temperature measurements, and 4) thermal rock properties. Intermediate and revised results were presented and discussed among all working-group participants. Based on a common understanding of the database entries, the community discussed different information that is necessary to assess heat-flow quality and uncertainty. These efforts have resulted in a new GHFD structure that is presented here. 
The new structure will form the basis for all new data entries, as well as for the reassessment of existing data. Background on heat flow, temperature and thermal conductivity Heat flow represents a derivative measure. It depends on the nature, intensity and distribution of subsurface heat sources, thermal rock properties, and the dominant heat transfer mechanism. In general, the sources of heat are related to processes in the Earth's interior, as well as to solar radiation. Heat transfer from higher to lower temperatures occurs through three distinct mechanisms: conduction, convection, and radiation. Convection is often the most relevant mechanism in fluids. In solids, conduction is the dominant heat transport mechanism as long as temperatures do not exceed several hundred degrees Celsius, above which radiation plays an increasingly dominant role. The Earth's lithosphere is solid and exhibits relatively low temperatures in most areas, allowing the general assumption that the heat flow is essentially by conduction. By definition, heat flow q is positive in the direction of decreasing temperature. Where conduction dominates, regional variations in heat flow can be related to changes in the basal heat flux and/or lithological composition of the crust, allowing, e.g., advanced geodynamic interpretations. The q value stated above represents the best estimate of the mean vertical conductive heat flow through the Earth's surface (often called terrestrial surface heat flow). Quite often, the heat flow determined at a certain location is influenced by near-surface factors and convective heat-transport processes and may therefore include non-steady state, non-vertical, and non-conductive heat transfer components (Figure 1). Possible influences are, e.g., non-vertical heat flow (heat refraction), topographical effects, non-steady state conditions (sedimentation/erosion effects, paleoclimate), additional heat sinks and sources, and convective fluid flow (e.g., Haenel et al., 1988). Because many factors influence the determination of terrestrial surface heat flow, it is important that the database documents the heat-flow determination method, the estimation methods for temperature and thermal conductivity, and any corrections applied to the terrestrial estimate. For a comprehensive overview of techniques and methods on temperature and conductivity measurements, we refer to Haenel et al. (1988), Beardsmore and Cull (2001), Schön (2015), and Palacios et al. (2019). In accordance with Fourier's first equation of heat conduction, the heat flow (q in mW/m²) is proportional to the temperature difference across an interval (the temperature gradient, in K/km) and the associated average thermal conductivity (λ in W/[m·K]). For the simplified case of one-dimensional flow of heat through the Earth's layers and surface (in the z direction, with z positive downwards), this can be expressed by q = -λ (dT/dz). However, if the vertical heat-flow density is somehow distorted or rocks are anisotropic, the temperature gradient and conductivity need to be considered as vector and tensor variables, respectively. For heat-flow calculations, the average thermal conductivity must reflect the in-situ conditions of the embedded rock and the natural flow of heat through the continuous interval.
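A minimal numerical sketch of the interval calculation just described is given below. The depths, temperatures and conductivities are hypothetical, and the harmonic (series) mean is used as one common way to average layered conductivities for vertical heat flow; the database itself records whichever averaging method the original authors applied.

```python
def interval_gradient(t_top, t_bottom, z_top, z_bottom):
    """Temperature gradient in K/km over a depth interval (z positive down, in m)."""
    return (t_bottom - t_top) / (z_bottom - z_top) * 1000.0

def harmonic_mean_conductivity(thicknesses, conductivities):
    """Series (harmonic) mean conductivity of stacked layers for vertical heat flow."""
    total = sum(thicknesses)
    return total / sum(d / k for d, k in zip(thicknesses, conductivities))

def heat_flow(gradient_k_per_km, conductivity_w_mk):
    """Magnitude of the conductive heat flow q = lambda * dT/dz, in mW/m^2.
    (W/(m K) * K/km numerically equals mW/m^2; the reported value is the
    upward heat flow, positive toward decreasing temperature.)"""
    return conductivity_w_mk * gradient_k_per_km

# Hypothetical borehole interval: 25.0 C at 500 m, 40.0 C at 1000 m, crossed by
# two layers of equal thickness with conductivities 2.1 and 3.0 W/(m K).
grad = interval_gradient(25.0, 40.0, 500.0, 1000.0)           # 30 K/km
lam = harmonic_mean_conductivity([250.0, 250.0], [2.1, 3.0])  # ~2.47 W/(m K)
print(f"q_c = {heat_flow(grad, lam):.1f} mW/m^2")
```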
As contactless in-situ measurement is difficult to achieve, representative measurements on reasonable sized rock specimens that reflect the compositional variation of the associated heatflow interval should also consider the respective subsurface pressure, temperature, and fluid saturation conditions. Techniques used for the determination of thermal conductivity should be selected and applied according to the sample characteristics (rock type, grain size, texture, sample conditions, expected conductivities, etc.) and their ability to be applied under the required pressure, temperature and fluid conditions. In addition, any contact resistance additionally introduced between sample and applied technique needs to be minimized. Temperature gradients reflecting the background thermal regime should not be affected by transient perturbations like hydraulic flow, drilling or climatic effects, geological (sedimentation/erosion) effects and others. Besides transient effects, structural effects resulting from heat refraction or rapid change in topography can cause local thermal anomalies that need to be considered if the measurements are used for terrestrial heat-flow determinations. In practice, subsurface temperatures are determined in boreholes, mines and tunnels, and in lake or oceanic sediments. A large number of techniques are available to accurately measure rock temperatures for different operational conditions, and/or to correct measurements so that they reflect equilibrium conditions. When free of perturbing effects, the recorded temperatures should allow the computation of interval thermal gradients with an inaccuracy of less than 1%. The new database structure The revision of the current database descriptions and considerations (Jessop et al., 1976;Lucazeau, 2019) resulted in some fundamental modifications that were partly triggered by the development and possibilities of modern database applications, and partly by methodological developments of heat-flow determination since 1976. The key innovation compared to the former heat-flow database structure is the implementation of a parent-child system for heat-flow data determined at each location. Therein, the parent level contains the main location information (e.g., geographical position, and associated metadata). For each location, only one parent entry is possible, containing also the most representative vertical terrestrial heat-flow value q of the site (Figure 1). Each parent entry is associated with at least one but often multiple child entries (child level). Child entries contain heatflow values (qc) with associated conductivity and temperature data, ideally with explicit consideration of conductivity and temperature related perturbations such as diurnal, annual and climatic surface-imposed temperature distortions (including those made below the sea-floor); heat refraction due to conductivity contrasts or anisotropy; convective disturbance or heat redistribution; topographic effects; sedimentation or erosion, and other similar quantifiable disturbances. The consideration and correction of these effects is reported individually for each heat-flow child value using meta-data flags. Multiple child entries for a location result from either determinations obtained over different depth intervals and/or determinations of different age, status, methodological approaches and/or by different authors. 
Based on the reported child values, and considering additional radiogenic heat production within the overburden where relevant, the q-value of the parent element represents the best estimate of the mean vertical conductive heat flow through the Earth's surface due to sources in the interior of the Earth (Figure 1, right side). q is almost always a subject of interpretation, which might change over time due to advances in processing and understanding, or as more 'child' data become available. In Figure 1, the determination of heat flow in any one depth interval would yield one child entry (specific to the interval) under the location's parent entry. This system allows for a consistent documentation of all of the available site-specific heat-flow values and supporting data, and provides structure for future estimates to be added. It also simplifies the se- lection of the relevant representative location values for research incorporating large data sets into continental or global numerical models. Depending on the applied methodology of heat-flow determination, relevant methodology-dependent database fields are included in the entries for the parent and child level, respectively. For example, heat-flow determinations based on probesensing data, such as for lake or marine (oceanic) measurements (as performed by a temperature or a combined temperature and thermal conductivity sensing heat-flow probe), require different database fields to assess data quality than heatflow determinations based on temperature data collected from greater depth intervals (e.g., from boreholes and mines) and their associated thermal conductivities ( Figure 2). Compared to the previous subdivision of the GHFD into continental and oceanic data, which assigned multiple meanings for some database fields (where data came from borehole/mines at the continents and from heat-flow probes in the oceans, respectively), the new database structure is more flexible. It accommodates, for example, the documentation of International Ocean Discovery Program (IODP) borehole-derived heat-flow data in the marine setting as well as the documentation of heatflow derived from oceanic probe techniques in on-shore lakes (continental setting). The new database structure includes 56 individual fields that hold information related to the heat-flow determinations. Subsets of these fields were aggregated into single fields in the former database to save storage space, but this constraint is now obsolete. For the same reason of saving storage space, the former database sometimes grouped closely located sites under a single item number for continental data, which is also no longer required. Furthermore, the database is no longer limited by character field length. Therefore, classical codes or short names are not carried forward into the new database. Due to the availability of other digital products (like cartographic services, geological maps, stratigraphic classifications, etc.), some database fields can be automatically filled by a computer using map overlays or database queries. Therefore, some fields in the new database structure refer to such services, e.g., digital object identifier (DOI; www.doi.org) or international geo sample number (IGSN; www.geosamples.org). Fields will be auto-filled when users do not provide the respective data (e.g., elevation). Reference formats, linked to a separate heat-flow literature database, should allow the user to easily access the main publications. 
As well as each main publication, additional publications may also be stored, as well as supplementary references necessary to understand data collection and processing methods. In contrast to the previous database structure, the new structure does not provide specific fields for recording radiogenic heat production measurements. These data are rarely reported and were scarce in the old database (reported for <2% of entries). However, measurements of radiogenic heat production are now considered in the metadata item for terrestrial heat-flow value corrections (i.e., considering the heat production of the overburden, see also below, sections 3.1 and 3.2). Figure 2 -New database structure showing associated data fields for the parent and child level relevant for all entries (bold black), for classical heat-flow determinations based on deeper temperature recordings from borehole and mines (blue italic), for shallow marine probe sensing data (purple), and for data administration (grey). Based on the observation that many of the database fields established by Jessop et al. (1976) held no respective data entries, the new structure also assigns a 'desirability' classification to each field according to its relevance for understanding the quality of the reported heat-flow value; 'mandatory', 'recommended', or 'optional'. This desirability classification emphasizes mandatory fields that delineate minimum requirements for heat-flow values to be entered into the database. The number of mandatory fields depends on the measurement type -18 for data from boreholes and mines, and 15 for data from probe sensing. Recommended fields number 26 for both methods, and greatly assist a full quality assessment of the heatflow value. Optional fields number nine for both methods. In addition, auto-added fields (e.g., continents or oceans from coordinates) and new database fields for administrative organization were introduced. A comprehensive list of all fields, including field desirability classifications and examples of associated data, is included as an Appendix. The new database structure aims to provide all of the relevant information for geothermal and heat-flow researchers to enable individual quality control, data exchange and comparison studies. Fields used for the organization and administration of the database are invisible to general users but are necessary to ensure database integrity and to enable internal data queries. Other types are numerical fields (1 to 8 bytes, containing integer, float and double precision format), string fields with up to 255 characters, and date fields (in the POSIX date format, YYYY-MM-DD, and year, YYYY). Each database field is described in detail in the following subsections. For each database field, six characteristics are listed to describe the field thoroughly: (1) the field name ('name'), (2) the internal field short name ('short name') used for data queries, (3) the field unit ('unit') defining the associated physical S.I. unit of the stored value if applicable, (4) the data type of the data field ('type'), (5) the range of values expected or allowed in the database field ('range'), and (6) a detailed explanation ('description') of the database field. For the sake of clarity, the fields are grouped in four main thematic groups, namely: heat-flow density, metadata and flags, temperature, and thermal conductivity. 
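The parent-child layout and a handful of the descriptive fields can be sketched as a toy relational schema. This is not the IHFC implementation; the table and column names, the example site and the placeholder DOI are all illustrative, and only a few of the 56 fields are shown.

```python
import sqlite3

# One parent row per site, many child rows per depth interval.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE heatflow_parent (
    parent_id     INTEGER PRIMARY KEY,
    site_name     TEXT NOT NULL,
    latitude      REAL NOT NULL,
    longitude     REAL NOT NULL,
    elevation_m   REAL,              -- may be auto-filled from map services
    q_mw_m2       REAL,              -- best estimate of the surface heat flow
    q_uncertainty REAL
);
CREATE TABLE heatflow_child (
    child_id       INTEGER PRIMARY KEY,
    parent_id      INTEGER NOT NULL REFERENCES heatflow_parent(parent_id),
    qc_mw_m2       REAL NOT NULL,    -- interval heat flow
    depth_top_m    REAL,
    depth_bottom_m REAL,
    method         TEXT,             -- e.g. borehole/mine vs. probe sensing
    reference_doi  TEXT              -- placeholder in this sketch
);
""")

con.execute("INSERT INTO heatflow_parent VALUES (1, 'Example site', 52.38, 13.06, 80.0, 62.0, 4.0)")
con.executemany(
    "INSERT INTO heatflow_child VALUES (?, ?, ?, ?, ?, ?, ?)",
    [(1, 1, 58.0, 100.0, 450.0, "borehole", "10.xxxx/example"),
     (2, 1, 65.0, 450.0, 900.0, "borehole", "10.xxxx/example")],
)
for row in con.execute(
    "SELECT p.site_name, p.q_mw_m2, c.qc_mw_m2, c.depth_top_m, c.depth_bottom_m "
    "FROM heatflow_parent p JOIN heatflow_child c USING (parent_id)"
):
    print(row)
```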
Fields: Heat-flow density A lot of contextual information is required to understand the status and quality of a reported heat-flow value and its method of computation (see Table 1 and Figs. 1 and 2). The fields reporting the heat-flow value and its uncertainty (entries 1 and 2 in Table 1) are relevant to both parent and child level entries. Other fields are required depending on the associated methodological approach. An informed assessment of the suitability of specific heat-flow data for geothermal and other analyses requires a detailed description of the conditions of data collection and processing. This is further explained in section 3.2 (Metadata and Flags). The Appendix provides an example of the application of this new IHFC database structure to an existing dataset. Heat-flow type and heat-flow transfer mechanism are two new criteria added to the database structure. The heat-flow type is related to the introduced parent-child database structure for reporting a heat-flow determination at a particular site. If the reported heat flow reflects a value for the terrestrial heat flow of the selected location, it is of type 'surface heat flow' (short: q; only parent level), if the reported value reflects the heat-flow density of a certain depth interval at the location, it is of type 'child heat flow' (type = qc; only in child level). By introducing the item 'heat-flow type' a parent-child system of location values (parent: q) and depth-specific interval values (child: qc) is established allowing, for example, depth-dependent geothermal analyses. The second criterion, heat-flow transfer mechanism, allows the classification of a reported heat-flow value according to the dominant heat-transfer process influencing the heat-flow value. Fields: Metadata and Flags This subgroup of database fields hold information relevant for a thorough evaluation of the reported heat-flow values. The subgroup covers a large range of topics and information (Table 2), for example, geographical data to locate a reported heat-flow value and publication data to trace its original source (reference publication). In addition, data fields provide information on the general geological setting and on the application of any instrumental or environmental corrections. Fields: Temperature The measured subsurface temperature and calculated temperature gradients have a first order control on heat-flow determination. In total, eleven database fields are included in the new database structure (Table 3, Figure. 2). Nine of the fields are newly established, although partly reflect previous descriptive codes that will no longer be used. The new fields of measured and corrected temperature gradients allow the reporting of subsequent corrections using newly developed approaches that are more sophisticated. In addition, the methods, correction approaches and shut-in times/relaxation times can be reported separately for the top and bottom depths of the respective heat-flow interval, allowing the proper reporting of differ-ent data origins and methodologies, if relevant, for each interval boundary. Ideally, a reported corrected temperature gradient shall represent the site-specific, unperturbed, terrestrial conductive conditions of the reported heat-flow interval at depth. Fields: Thermal Conductivity Nine specific fields in the database describe the topic thermal conductivity in the context of heat-flow determination ( Table 4). Six of the fields are newly established and partly picked up former applied descriptive codes. 
Most of the fields in this group are important to understand the quality and status of the reported thermal conductivity value, and are therefore relevant to the evaluation of the quality of the associated heat-flow value. Ideally, the reported mean thermal conductivity of an interval (item 47 in Table 4) shall consider the in-situ pressure, temperature and fluid conditions prevailing within the relevant heat-flow interval at depth.

Fields: Database administration and auto-added fields

The fields in this group are used for database queries and administration. They are auto-generated and not editable by a general user. The fields are: heat-flow type (parent or child), entry id (an unambiguous identity number for each entry), parent id, child id, quality code (from the old database), editor and last-modification date, and literature id for the link to the associated literature database. Content fields auto-filled from coordinates and GIS data web services are, for example: continent, country, geographic domain or region, palaeoclimate region, and underwater feature (oceanic crust region).

Summary and outlook

The new database structure makes it possible to interconnect the GHFD to other digital data resources, such as map data (continents, geology, ocean region), sample data (IGSNs), library services (DOI), etc. The new GHFD structure will also provide a basis for a live plausibility check for newly submitted data. New data relevant to heat-flow determinations may in the future be generated through the interpretation of spatial exploration data and satellite images (e.g. spatial data of bottom surface reflections or other temperature raster data). Such data may be linked to the GHFD as an add-on service in a separate database. The main goal of past editions of the GHFD was to provide a comprehensive compilation of global heat-flow data. The new GHFD shall also be the starting point for delivering well-documented and reliable heat-flow values, representing the new IHFC database standard. Aligned with this publication, a restructured version of the existing database is published as a data publication (Fuchs et al., 2021). The process of data screening and revision of incomplete, wrong or empty data entries will be ongoing and will rely on this new database. In parallel, the IHFC will provide a new quality scheme allowing users to select appropriate, reliable heat-flow values for their specific purpose.
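To make the parent-child structure (surface heat flow q at the parent level, interval values qc at the child level) and the idea of auto-filled content fields more concrete, here is a minimal Python sketch; the aggregation rule, the attribute names and the bounding-box lookup are illustrative assumptions only, since the actual database stores the site value chosen by the original author and derives the content fields from GIS data web services.

from dataclasses import dataclass, field
from statistics import mean
from typing import List

@dataclass
class ChildHeatFlow:
    """Depth-interval determination (heat-flow type 'qc', child level)."""
    top_depth_m: float
    bottom_depth_m: float
    qc_mW_m2: float                                # interval heat-flow density

@dataclass
class ParentHeatFlow:
    """Site-level terrestrial heat flow (heat-flow type 'q', parent level)."""
    site_name: str
    latitude: float
    longitude: float
    children: List[ChildHeatFlow] = field(default_factory=list)

    def surface_heat_flow(self) -> float:
        # Illustrative aggregation only; a real entry reports the value chosen by
        # the original author, not necessarily a simple mean of the child intervals.
        return mean(c.qc_mW_m2 for c in self.children)

# Hypothetical stand-in for the GIS web services that auto-fill content fields such
# as 'continent'; rectangular boxes are used here purely for illustration.
CONTINENT_BOXES = {
    "Europe": (-25.0, 45.0, 34.0, 72.0),           # (lon_min, lon_max, lat_min, lat_max)
    "Africa": (-20.0, 52.0, -35.0, 34.0),
}

def auto_fill_continent(lon: float, lat: float) -> str:
    for continent, (lon_min, lon_max, lat_min, lat_max) in CONTINENT_BOXES.items():
        if lon_min <= lon <= lon_max and lat_min <= lat <= lat_max:
            return continent
    return "unresolved"                            # resolved by the GIS service in practice

site = ParentHeatFlow("Example site", 52.5, 13.4,
                      [ChildHeatFlow(100.0, 500.0, 61.0), ChildHeatFlow(500.0, 900.0, 65.0)])
print(site.surface_heat_flow())                    # 63.0
print(auto_fill_continent(site.longitude, site.latitude))   # 'Europe'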
2021-04-28T01:52:09.847Z
2021-03-22T00:00:00.000
{ "year": 2021, "sha1": "d7dfe9e6575a02ce4b8057390e8aa2a1a757c690", "oa_license": "CCBYNCSA", "oa_url": "http://ijthfa.com/index.php/journal/article/download/62/46", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "d7dfe9e6575a02ce4b8057390e8aa2a1a757c690", "s2fieldsofstudy": [ "Environmental Science", "Geology", "Physics" ], "extfieldsofstudy": [] }
230574213
pes2o/s2orc
v3-fos-license
Nomogram with a novel microenvironment signature is systematically constructed and validated to predict the survival rate of glioma patients

Glioma accounts for the highest proportion of primary intracranial malignant tumors. The microenvironment enormously influences the process of glioma progression. Our study aims to establish an individualized prognostic nomogram for glioma patients with a microenvironment signature. Glioma samples of the Chinese Glioma Genome Atlas (CGGA) were grouped by the immune and stromal score based on the ESTIMATE algorithm. Microenvironment-related genes (MRGs) in glioma were analyzed by R. To determine the genes best correlated with prognosis, univariate and multivariate Cox regression analyses were used to analyze the MRGs. Using the selected genes (CHI3L1, SOCS3, SLC47A2, COL3A1, SRPX2 and SERPINA3), we established the prognostic risk score model (microenvironment signature) and validated it. Gene Set Enrichment Analysis (GSEA) showed that the high-risk group was mainly enriched in immune and stromal function KEGG pathways. Finally, the nomogram was constructed and evaluated. The receiver operating characteristic (ROC) curve, calibration plots and decision curve analysis (DCA) of the training and validation sets indicated the excellent predictive performance of the nomogram. In conclusion, the 6-gene microenvironment signature can not only provide directions for the basic research of glioma, but can also be included as an independent prognostic index in a nomogram for individual prediction to guide clinical treatment.

Introduction

Among primary intracranial malignant tumors, the proportion of gliomas can be as high as 81% 1. Although many achievements have been made in the clinical and molecular research of glioma, there are significant deficiencies in the study of prognostic biomarkers, and a more accurate and reliable prognostic index for glioma patients is also needed. Genes intrinsic to tumor cells play essential roles in the evolution of glioma 2,3. At the same time, the tumor microenvironment has vital effects on gene expression in tumor tissues [4][5][6][7]. The tumor microenvironment contains two main non-tumor components, immune cells and stromal cells, which are crucial for the diagnosis and prognosis of tumors 8,9. Many studies have shown that some microenvironment-related genes (MRGs) play essential roles in glioma in many signaling pathways 10,11. Therefore, MRGs are expected to be clinical prognostic indicators and therapeutic targets for glioma. Thanks to the continuous development of genome sequencing technologies, several glioma molecular biomarkers have been discovered. There have been many studies on 1p/19q codeletion, tumor protein 53 (TP53) mutations, isocitrate dehydrogenase (IDH) mutation and so on 12,13. Emerging research suggests that certain single genes do not fully represent tumor characteristics, but the global gene expression pattern of multiple genes could be used as a special molecular biological marker for subgroup classification, early diagnosis, treatment targeting, prognosis prediction and so on in glioma 14,15. However, there is little research on the global expression pattern based on MRGs in glioma. Recently, a newly proposed computational algorithm, known as "Estimation of Stromal and Immune cells in Malignant Tumor tissues using Expression data (ESTIMATE)", was developed 9 and successfully applied to calculate the degree of infiltration of non-tumor cells in several malignant tumors such as prostate cancer 16, breast cancer 17, and colon cancer 18.
Therefore, in this study, we use the ESTIMATE algorithm to evaluate the RNA sequencing data of glioma samples, and construct and validate a microenvironment signature that can predict prognosis and provide research directions for therapeutic targets in glioma. Moreover, combining clinical parameters and the microenvironment signature, we established an innovative and promising predictive nomogram model, which has more accurate predictive ability for glioma.

Identification of MRGs and enrichment analysis

Using the immune or stromal median score as the cut-off, we divided the 693 glioma cases into high/low immune or stromal score groups. The K-M survival curves [ Figure S1] showed that, whether in the immune (p = 0.281) or stromal (p = 0.114) groups, the median overall survival of patients with high scores was lower than that of patients with low scores, although the differences were not statistically significant. We compared their RNA-seq data based on the high/low immune or stromal score groups. The heatmaps [ Figure 1A] showed that the gene expression profiles of the cases were different.

Identification of prognosis-related MRGs

We excluded patients with missing age and survival time in cohort 1 and performed univariate Cox regression analysis on the 318 MRGs. The significant prognostic genes (P < 0.05) were arranged in ascending order, and the top 10 genes related to prognosis were identified and further analyzed [ Figure 3B]. We treated cohort 2 as an external validation cohort and verified it in the same way. The K-M curve also demonstrated that the OS of patients in the high-risk group was markedly worse (P < 0.001) [ Figure 3C]. The microenvironment signature also showed favorable ability to predict the OS rates at 1, 2 and 3 years, with AUC values of 0.762, 0.82 and 0.826, respectively [ Figure 3D].

Figure 3. Survival analysis and prognostic evaluation of the microenvironment signature in glioma. K-M survival curves of the risk score for patient OS in the training (A) and validation set (C); the OS of patients in the high-risk group was significantly worse than in the low-risk group. The prognostic evaluation of the microenvironment signature displayed by the ROC curve for predicting the 1-, 2- and 3-year OS rates in the training (B) and validation set (D).

Construct and verify the nomogram

Based on the training cohort, we established a prognostic nomogram, which can predict the 1-, 2- and 3-year OS of glioma patients. In the decision curve analysis, among all the areas formed by the curves and "None" and "All", the nomogram curve is the largest, which showed that the prediction ability of the nomogram model is better than that of the single-parameter models.

Discussion

The tumor microenvironment of glioma plays an essential role in the development of glioma. Changes in microenvironment-related genes can affect gene expression in tumor tissue and, in turn, the clinical outcome 4,5. Therefore, MRGs are promising prognostic indicators and treatment targets for glioma. With the progress of high-throughput sequencing technology, more and more biomarkers related to the survival of glioma patients have been identified 12,13. Many global gene expression patterns can be used in prognosis prediction, risk stratification and treatment guidance for glioma 20,21. However, there is still considerable room to study the global expression pattern based on MRGs in glioma. ESTIMATE is a bioinformatics tool for predicting non-tumor cell infiltration. It can score each sample by evaluating the particular gene expression features of stromal and immune cells 9. In this study, we first used ESTIMATE to score the cohort 1 samples.
Taking the median score as the dividing value, the samples were partitioned into high/low immune or stromal score groups. Then, we regarded the 318 intersection genes of the DEGs between the immune and stromal groups as MRGs. Finally, univariate and multivariate Cox regression analysis confirmed that 6 up-regulated genes (CHI3L1, SOCS3, SLC47A2, COL3A1, SRPX2 and SERPINA3) were significantly related to prognosis. TCGA-based GEPIA also confirmed that they are up-regulated in glioma compared to normal tissues. GSEA showed that the gene sets of the high-risk group in cohort 1 were chiefly enriched in immune- and stromal-related KEGG pathways, which suggested that gliomas with high expression of the 6 MRGs may affect tumor progression through these pathways. CHI3L1, also known as YKL-40, is a pro-inflammatory factor that can be used as a biomarker of glioma and brain injury. High-level expression of YKL-40 in human gliomas can activate AKT 22. Angiogenesis and malignancy of glioblastoma can be synergistically inhibited by anti-YKL-40 antibody and ionizing irradiation 23. SOCS3 is related to tumor progression and therapeutic response in glioma, and it is crucial for glioma cells to acquire resistance to radiation. Hypermethylation of the SOCS3 promoter is an important marker of poor prognosis in glioma 24,25. The expression of SLC47A2 can be cis-regulated in renal cell carcinoma 26. Type III collagen is an important signal molecule that promotes wound healing, and COL3A1 encodes its alpha 1 chain 27. SRPX2 enhances the EMT process and promotes glioma metastasis through the MAPK signaling pathway 28. The upregulation of SERPINA3 might reshape the extracellular tissue matrix and promote the invasion of glioma, and it was significantly related to the poor survival of patients 29. In summary, CHI3L1, SOCS3, SRPX2 and SERPINA3 were significantly associated with the evolution of glioma. However, SLC47A2 and COL3A1 had not been studied in glioma. These MRGs can be used not only as independent prognostic biomarkers but also as potential targets to guide the treatment of glioma. Then, based on the expression of the 6 MRGs, we developed and validated a novel risk score model (microenvironment signature) and separated glioma patients into low/high-risk groups based on their risk scores. Subsequently, the K-M curve showed that the high-risk group had an appreciably poorer prognosis. Therefore, glioma patients with high risk scores should receive more attention and adopt more aggressive individualized medical strategies. At the same time, they need to be closely followed up to detect recurrence. Nomograms can intuitively show the prognosis, which makes them widely used in clinical practice 30. In this study, we constructed and verified a nomogram based on the microenvironment signature, IDH mutation status and age. As far as we know, this nomogram is an innovative combination of the microenvironment signature and clinical parameters, which can individually and more precisely predict the survival rate of glioma patients. The ROC curve showed that the nomogram has an excellent ability to predict the 1-, 2- and 3-year OS rates. The calibration curve showed that the prediction of the nomogram is in outstanding agreement with the actual observations, and the DCA curve showed that the nomogram model was better than the single-parameter models.
Combining the results of these three indicators, the innovative and promising nomogram demonstrates excellent prediction ability. There is no denying that there are still some deficiencies in our research. First, the data we downloaded from CGGA are incomplete and limited. Some clinical information of some patients is missing, and some clinical parameters, such as operation method, tumor location, tumor size, etc., were not included in the study. Second, a limitation of this prediction model lies in its retrospective nature, so it needs to be further verified in future clinical trials.

Database

From the Chinese Glioma Genome Atlas (CGGA, http://www.cgga.org.cn/) databases, we downloaded clinical information and RNA sequencing data of glioma patients. Cohort 1 (mRNAseq_693) 31 and cohort 2 (mRNAseq_325) 32 were selected as the training set and validation set, respectively. Figure 6 shows the schematic diagram for constructing the nomogram. Using the ESTIMATE algorithm, immune and stromal scores were calculated for each sample. Taking the median immune or stromal score as the dividing point, cohort 1 was partitioned into high/low immune or stromal score groups. We screened the differentially expressed genes (DEGs) in the immune or stromal score groups by edgeR 33.

Construct and evaluate the prognostic risk score model of MRGs

First of all, univariate Cox regression analysis was conducted on the MRGs in training cohort 1 by using the survival package in R 3.6.1. Genes with P < 0.05 were deemed statistically significant for the overall survival (OS) of glioma patients 36. Then the first 10 genes with the lowest P-values were analyzed by multivariate Cox regression analysis. After the analysis, we used the selected genes as the genes related to the optimal prognosis and established a prognostic risk score model to predict OS 37. The risk score was obtained according to the following formula: Risk score = Σi (βi × xi), where βi and xi are the coefficient and relative expression value of each selected gene, respectively 38, and each patient could get a prognostic risk score according to this formula. According to their median risk score, glioma patients were divided into low/high-risk groups. Next, we constructed the Kaplan-Meier (K-M) survival curves of the low/high-risk groups, and the survival difference between the two groups was evaluated by a two-sided log-rank test.

Construction and validation of the nomogram

We combined the MRGs-based prognostic model (microenvironment signature) with other clinicopathological parameters of glioma patients for univariate and multivariate Cox proportional hazards regression analysis in cohort 1 and cohort 2. After the analyses, we screened out all independent prognostic factors and used the rms R package (https://cran.r-project.org/web/packages/rms/) to construct a nomogram of these independent prognostic factors to evaluate the probability of 1-, 2- and 3-year OS in cohort 1 glioma patients 14. The discriminative ability of the nomogram was graphically evaluated using the C-index, AUC values, calibration plots and decision curve analysis (DCA) 39,41,42. Finally, cohort 2 was used as an external validation of the prognostic nomogram. All analyses were performed with R, and P < 0.05 was deemed statistically significant. Hazard ratios (HRs) and 95% confidence intervals (CIs) were also stated.

Conclusion

In this study, a promising nomogram containing a novel microenvironment signature was constructed and validated for individual prognostic assessment of glioma.
Further bioinformatics analysis of these MRGs and of the microenvironment will help to clarify their possible survival mechanisms. Next, this model will be further verified in clinical trials and is likely to be translated into meaningful practice to guide the individualized treatment of glioma patients.
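For readers who want to see the shape of the analysis pipeline described in the Methods above, the sketch below combines the univariate Cox screening step and the risk-score/median-split step in Python with the lifelines package (the published analysis was done in R with the survival and rms packages); the data-frame column names and the placeholder coefficients are assumptions for illustration and are not the values fitted in the study.

import pandas as pd
from lifelines import CoxPHFitter

def univariate_cox_screen(df, genes, time_col="OS_time", event_col="OS_event", alpha=0.05):
    """Fit one Cox model per gene; return (gene, p-value) pairs with p < alpha, sorted by p."""
    pvals = {}
    for gene in genes:
        cph = CoxPHFitter()
        cph.fit(df[[time_col, event_col, gene]], duration_col=time_col, event_col=event_col)
        pvals[gene] = cph.summary.loc[gene, "p"]
    return sorted(((g, p) for g, p in pvals.items() if p < alpha), key=lambda gp: gp[1])

# Placeholder coefficients (beta_i) for the six selected genes -- NOT the values fitted
# in the study; they only illustrate the form Risk score = sum_i (beta_i * x_i).
COEFFICIENTS = {"CHI3L1": 0.12, "SOCS3": 0.08, "SLC47A2": 0.21,
                "COL3A1": 0.05, "SRPX2": 0.17, "SERPINA3": 0.09}

def risk_scores(expression: pd.DataFrame) -> pd.Series:
    """Per-patient risk score as the coefficient-weighted sum of gene expression values."""
    return sum(beta * expression[gene] for gene, beta in COEFFICIENTS.items())

def stratify_by_median(expression: pd.DataFrame) -> pd.DataFrame:
    """Split patients into high/low risk groups at the median risk score."""
    scores = risk_scores(expression)
    groups = scores.gt(scores.median()).map({True: "high", False: "low"})
    return pd.DataFrame({"risk_score": scores, "risk_group": groups})

# Usage sketch (hypothetical data): 'expr' holds survival columns plus per-gene expression.
# top10 = univariate_cox_screen(expr, mrg_list)[:10]
# strata = stratify_by_median(expr[list(COEFFICIENTS)])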
2020-12-24T09:09:59.878Z
2020-12-16T00:00:00.000
{ "year": 2020, "sha1": "b7749288196337db7916ce50fcc338daf9600736", "oa_license": "CCBY", "oa_url": "https://www.preprints.org/manuscript/202012.0404/v1/download", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "89f2cace8828867c93adbfacde537f240ab88be2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
258336758
pes2o/s2orc
v3-fos-license
Bridging the educational gap in terms of digital competences between healthcare institutions' demands and professionals' needs

Background
Healthcare professionals with insufficient digital competence can be detrimental to patient safety and increase the incidence of errors. In order to guarantee proper care, healthcare organizations should provide opportunities to learn how to use technology, especially for those professionals who have not received training about this topic during their undergraduate studies.

Objective
This exploratory study aimed to conduct surveys among Spanish healthcare professionals to determine whether their organisations had trained them in the use of healthcare technology and the areas where most emphasis was placed.

Methods
1624 Spanish healthcare professionals responded to an ad hoc online survey with 7 questions related to the digital skill training offered by the healthcare organisations they work for.

Results
Nurses were the most widely represented group, making up 58.29% of the total, followed by physicians with 26.49%. Only 20% of the nurses surveyed had received some training from their institution related to healthcare technology. According to the participants' responses, physicians received significantly more training in this area than nurses. Training related to database searching for research purposes or computer management followed the same trend: nurses also received less training than physicians in this area. 32% of physicians and nurses paid for their own training if they did not receive any training from their institutions.

Conclusions
Nurses receive less training, on topics such as database searching or computer management, from the healthcare centres and hospitals where they work. Moreover, they also have fewer research and digital skills. Both of these factors may lead to deficits in their care activities and have adverse effects on patients, not to mention fewer opportunities for professional progress.

Introduction
In order to reach patients, save costs and streamline procedures, various public and private organisations advocate developing and implementing digital health systems, or ehealth, in hospitals and healthcare centres among healthcare professionals [1][2][3]. However, few healthcare systems have committed to educating, training and updating their healthcare professionals in these digital competences. Similarly, very few healthcare professionals can apply such competences, even when given institutional support [4]. The Committee on Digital Skills for Healthcare Professionals concluded that more than 80% of healthcare professionals had insufficient or inadequate training in ehealth or mhealth (digital health mediated by mobile technology) [5]. Likewise, the WHO Atlas of National eHealth Profiles [6] placed Spain at a medium-low level in terms of eHealth capacity building for healthcare professionals. Does this mean that these professionals lack sufficient digital skills to recommend these resources to patients, or that they do not receive enough institutional support from their organisations to train in and use them in their professional lives? When it comes to nurses, for example, the degree syllabus that they follow in Spain does not include subjects that cover all the required areas to be digitally "competent", as recommended by various international organisations and related publications [7][8][9][10].
It is important to remember that insufficient digital competence among healthcare professionals can be detrimental to patient safety and increase the incidence of errors [4]. In fact, some studies have reported errors of up to 35% related to digital medical prescriptions due to unfamiliarity with the software or a lack of digital skills [11]. In addition, there is evidence that nurses' technological skills influence the frequency of their technology use, i.e., the better their skills, the more they are used [4]. However, other research shows that the motivation to learn and convey is not always directly related to the training received, but is also influenced by other factors such as the work climate and institutional support [12]. Consolidating their learning requires opportunities to apply what has been learned in the professional environment [13], and this is where healthcare institutions play an important role [14]. Therefore, healthcare organisations are responsible for providing sufficient resources, equipment and space for the use of technology, as well as providing healthcare professionals with the time and opportunities to learn how to use them [4], especially for those nurses who have not received training in this area during their undergraduate studies [15]. The aim of this exploratory study was, thus, to conduct a survey among Spanish healthcare professionals to find out whether their healthcare organisations (hospitals, health centres, and other services) train them in the use of healthcare technology and to identify the areas where most emphasis is placed. We were also interested in identifying any particular differences in terms of training between professional categories or areas. Our preliminary assumption was that few healthcare professionals currently receive training about digital skills from their organisations in Spain.

Materials and methods

An online survey, open to all types of healthcare professionals working in Spain, was launched in order to obtain information on the digital skills training they had received from their healthcare organisations and institutions. Responses were accepted from physicians, nurses, midwives, physiotherapists, auxiliary nursing care technicians (TCAE), pharmacists, psychologists, health emergency technicians (TES) and others. The ad hoc questionnaire was developed, revised and agreed upon by an expert panel composed of three researchers. These three experts belong to multidisciplinary fields (health, technology and engineering) and helped to define the questions asked as well as to make them more understandable. This questionnaire is based on the conclusions of Konttila [4] and Kaihlanen [15]: it is healthcare organisations' duty to ensure the digital literacy of their professionals. The survey only included seven questions from two areas: professional data and training received. In the former, the participants could add categories other than those offered on the list. All questions were compulsory and therefore no questions were left unanswered. The estimated time needed to fill out the questionnaire was 4 minutes. The complete survey can be seen in Table 1. The inclusion criterion for participation in the survey was to be a healthcare professional who is currently working. Therefore, responses from retired or unemployed professionals, students or administrative staff were excluded.
An introductory text before the survey explained the purpose of the study, referred to the approval of the ethics committee, and described how the survey data would be handled. After acknowledging and accepting this, the healthcare professional was able to carry out the survey. We used the following formula to calculate the sample size necessary to estimate a population proportion (p) of a large population with 95% confidence and a margin of error no larger than e = +/-5% for the most uncertain case (the worst-case scenario), p = 50%: n = z^2 * p * (1 - p) / e^2 [16]. Based on this formula, 384 responses were needed. This questionnaire was developed on Google Forms, which stores the responses given and facilitates their analysis. The participants clicked on a link for access and were taken directly to the survey. Registration or personal details were not required. In order to reach different types of healthcare professionals, the survey was sent via Instagram, Facebook, Twitter, and LinkedIn. The aim was to reach different profiles in terms of age, gender, digital skills, profession, etc. The only requirement was to be a working healthcare professional. The survey was active online from 14th July to 19th October 2021. Over these three months, frequent reminders were sent through the social networks mentioned above. Ethical approval to conduct this survey was obtained from the Research Ethics Committee of the Polytechnic University of Valencia, Spain (P4_25_07_18). No personal data was collected. Participation was free of charge and completely voluntary. Different tests were applied to check whether the results were statistically significant (P < 0.05). Most variables were dichotomous. We used the chi-square test statistic with the corresponding degrees of freedom, depending on the dimensions of the contingency table.

Results

A digital media survey was conducted to assess whether health institutions currently offer nurses and other health professionals training, as well as what content is provided. The questions on training received were answered Yes or No, although the option "Other" was included. During this period, 1624 responses were received, and 80 of them were eliminated because they did not meet the necessary criteria mentioned above. From the 1544 accepted responses, 900 were obtained from nurses, 409 from physicians, 56 from pharmacists, 36 from physiotherapists, 23 from midwives, 11 from psychologists, 49 from TCAE (auxiliary care technicians), 28 from other technicians, 10 from occupational therapists and the remaining 22 from different profiles (biologists, nutritionists, opticians, etc.). Figure 1 shows a graphical distribution of the participants' profiles. The most represented group was nurses (58.29% of the total), followed by physicians (26.49%), pharmacists (3.62%) and TCAE (3.17%). According to the Spanish National Institute of Statistics [17], in 2020 (data published in 2021), there were 276,191 registered physicians and 325,018 registered nurses (45.93% physicians compared to 54.07% nurses). The necessary sample would be 384 surveys carried out by these two groups (confidence level 95%, 5% margin of error), according to the calculations mentioned above in the Methodology section. We can therefore state that the number of responses obtained is representative of both physicians and nurses.
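The sample-size calculation cited above can be reproduced in a few lines of Python; this simply restates the standard formula for estimating a proportion with the values given in the text (z = 1.96 for 95% confidence, p = 0.5, e = 0.05).

def sample_size_for_proportion(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> float:
    """Minimum sample size n = z^2 * p * (1 - p) / e^2 for estimating a proportion."""
    return z ** 2 * p * (1 - p) / e ** 2

print(sample_size_for_proportion())   # 384.16, i.e. the roughly 384 responses cited in the text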
Other healthcare professionals participated to a lesser extent than doctors and nurses and are therefore under-represented in this survey. To the question Do you currently work in…, nurses answered that 581 of them (64.55%) work in hospital settings and 169 (18.77%) in primary healthcare centres. 43 nurses work in universities (4.77%), 26 in out-of-hospital emergencies, and 22 in both nursing homes and social-health centres. The rest work in other centres or units such as public healthcare, mental healthcare, hemotherapy, the private sector, mutual insurance companies, etc. Table 2 shows a graphical distribution of nurses and physicians depending on their work area. To the question Do you currently work in…, the physicians answered that 60.39% of them work in a hospital setting whereas 25.67% work in primary care centres. 15 of the participants in the survey indicated that they work in the private sector (3.67%), 12 in a university setting (2.93%), 9 in both residences and social-health centres (2.20%), and 9 in out-of-hospital emergencies (2.20%). The rest work in other centres or units such as public healthcare, hemotherapy, mutual insurance companies, etc. We can see a similar pattern for physicians and nurses in hospitals and primary healthcare. Only 1% of those working in the management area responded. In response to the first question in the Training received from your company or organisation section, Have you received any training, in recent years, related to the use of technology in the healthcare field?, 37.88% of the professionals received training in this area. Table 3 shows detailed information classified by professional profile. It can be seen from this table (Table 3) that only pharmacists and the group of other professional profiles had a higher number of people in the category "yes, I have received training". By isolating the physicians' and nurses' answers (1309 responses) for comparative purposes, we find that more than 60% of them have not received any training in this area (61.57%). The difference between the training received by physicians and nurses is statistically significant (p = 0.00002). Therefore, physicians receive more training in this area than nurses. However, comparing primary care and hospital nurses and physicians, there are hardly any differences between nurses (65.68% do not receive training in primary care compared to 66.78% who do not receive it in hospitals). There is a difference between physicians working in primary healthcare and in hospitals (58.09% do not receive training in primary healthcare compared to 49.39% in hospitals). Moreover, more hospital physicians have received training than those who have not (50.60% vs. 49.39%). These data can be seen in detail in Table 4. Answering question 2, In your company or organisation, have you received any training in recent years related to creating healthcare content for social networks, videos, videoconferences, etc.?, only 14.44% of the professionals received training in this area. Table 5 shows the training received in this field, categorised by professional profile. Isolating physicians' and nurses' answers (1309 responses) for comparative purposes, we find that almost 85% have not received training in creating digital content, the use of videos or social networks. The difference between the training received by physicians and nurses is not statistically significant in this case (P = 0.8689). Therefore, both groups receive little training.
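The physician-versus-nurse comparisons reported in this section rely on chi-square tests of contingency tables (see the statistical note in the Methods above); a sketch of such a 2 x 2 comparison with scipy is shown below, where the split between trained and untrained respondents is hypothetical -- only the group totals (409 physicians, 900 nurses) come from the survey.

from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 table: rows = physicians / nurses, columns = trained / not trained.
# Only the row totals (409 physicians, 900 nurses) come from the survey; the split
# between the columns is made up for illustration.
observed = [
    [190, 219],   # physicians
    [313, 587],   # nurses
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.5f}")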
Making the same comparison between physicians and nurses for question 3, From your company or organisation, have you received any training related to database searches in recent years?, we find that 64.47% have not received any training in this area from their organisations for research (844 responses), with a statistically significant difference in favour of the training received by physicians compared to that received by nurses (P = 0.0000). These data and the comparison by workplace can be seen in Table 6. Primary care physicians received more training in these subjects than their hospital counterparts (55.24% vs. 47.37%). This trend can also be found in primary care and hospital nurses (33.73% vs. 26.86%). If we analyse the answers to question 4, including only nurses and physicians, In your company or organisation, have you received any training related to the use of computers in recent years?, we can see that 59.82% of nurses and physicians have not received any training related to their use at the workplace. Again, the difference between the two groups is statistically significant. Consequently, physicians have received more training than nurses in computer management (P = 0.0040). Comparing primary healthcare and hospitals, the differences are less relevant than in other questions. However, it is clear that professionals working in hospitals have received somewhat more training in computer management than those in primary health care (see Table 7). Finally, for question 5, Have you paid for any of the courses mentioned above yourself?, comparing the data obtained from physicians' and nurses' responses as a whole, we found that out of the 1309 doctors and nurses who responded to the survey, 427 paid for their own training on these matters (32.62%), with no statistically significant differences between the two groups (P = 0.1905). So, it can be said that both physicians and nurses pay for their own training equally (see Fig. 2). If we compare all the professional categories included in the survey, we find that psychologists, pharmacists, and occupational therapists invest the most in their own digital skills training whereas physicians and TCAEs have paid the least. However, only a small number of professionals are in these categories in our sample and further studies would be necessary to confirm or refute this. In total, 512 healthcare professionals (33.16%) said they had financed their own training (see Table 8).

Discussion

This analysis aimed to depict the current situation of digital skills training offered by healthcare institutions to healthcare professionals. It was carried out with the participation of different professional profiles, most of whom were physicians and nurses. Our initial hypothesis was confirmed after analysing the data obtained. Thus, the number of healthcare professionals currently receiving training in digital skills from their organisations is low. Among those who do receive it, physicians receive more training than other professionals such as nurses. The fact that nurses receive less training in e-skills and technology management may affect the quality of their healthcare, given that nurses need these skills to provide safe patient care [15,18]. This trend of less training for nurses is observed in every question, except for Question 2 (creation of digital health content), where there is no difference between physicians and nurses: only 14% have received training in this area.
However, it is essential to point out that the use of videos for educational and informative purposes in health can be motivating and useful for patients [19,20]. It would, thus, be advisable to include this type of content in courses and training given by healthcare organisations, especially in the current pandemic, where the role of professionals as mediators in filtering reliable information is more important than ever [21]. Regarding the workplace, there are no differences in training between primary care and hospital nurses; conversely, there are differences between physicians in primary care and in hospital settings. Hospital-based physicians receive the most technology and digital competence training, which could be related to the belief that primary care physicians do not require technology in their clinical practice. However, there are numerous cases where the use of technology in the primary care setting can be implemented, such as telemonitoring patients, coordination with other clinical units, filtering relevant information, distributing quality information to the general population, etc. [22,23]. Regarding research, there is a large gap between the database search training offered to nurses by healthcare institutions and that offered to physicians. Information seeking is a key sub-skill within digital competence that helps to locate quality information and use it responsibly. This gap can lead to a huge disparity in nursing research and reduce the likelihood of positive outcomes for patients and the healthcare system [24][25][26][27][28]. For evidence-based practice to become a reality, involving all healthcare professionals is a priority, and for this reason they must have the necessary competences [29,30]. Similarly, to make digital health a reality and apply it in all healthcare areas, involving all healthcare professionals in regular training and updating programmes is a must [31]. Finally, it should be noted that more than 33% of the healthcare professionals surveyed paid for their own digital-related training, showing a high level of interest on their behalf; moreover, healthcare organisations are not only failing to meet the needs of their professionals but also those of society, especially in pandemic times, when training in digital and technological skills has become a priority [32].

Limitations

One of this study's limitations is the fact that participants were recruited on social networks. Having to fill in an online questionnaire could reduce participation by people with few digital skills. The creation of this questionnaire specifically for this study should also be noted as a limitation. However, efforts have been made to achieve a sizable sample to reduce possible biases.

Conclusions

The aim of this study was to find out whether the basic training deficiencies in digital competencies for nurses in their undergraduate training were compensated for by their employers. However, as we have seen, Spanish healthcare institutions do not train 100% of their professionals in digital competences or in the use of technology to empower patients, etc., and there is still a lot of room for improvement. Very few healthcare professionals receive training in higher-level competences such as creating video resources, which hinders their applicability in clinical practice. Physicians receive the most training in this area, although the number is still limited.
It is important to remember that the system is multidisciplinary and requires all the agents involved to have sufficient knowledge to guarantee quality care based on the best scientific evidence. Nurses receive less training than physicians from their healthcare centres and hospitals in research and technology and, therefore, have fewer research and digital skills, which may lead to deficits in their practice with negative effects on patients as well as fewer opportunities for professional growth.

Recommendations for the future

As calls to action, we consider that:
1. Training for all professional profiles should be reinforced by institutions, considering that the digital competence of healthcare workers is an important asset for improving the population's health.
2. Institutions must strengthen the research competencies of nurses through lifelong learning, monitoring and ongoing support. Nursing research can improve the quality of patient care and the professional development of nurses in their discipline.

Funding

This research received no external funding.

Data availability

The datasets generated and/or analysed during the current study are not publicly available but are available from the corresponding author on reasonable request.

Declarations

Ethics approval and consent to participate

Conceptualization and conduct of this study were based on the principles of the Declaration of Helsinki. Participation in the survey was voluntary, no personal data was collected, and anonymity was always maintained. All potential participants received written information on the study (reason for the study, objective, processes, data protection), and had the opportunity to contact the investigators in the event of questions at any time during the study. Informed consent to participate was assumed by individuals filling out the questionnaire and had to be confirmed (by ticking a box) at the beginning of the questionnaire. Ethical approval to conduct this survey was obtained from the Research Ethics Committee of the Polytechnic University of Valencia, Spain (P4_25_07_18).
2023-04-27T14:18:57.701Z
2023-04-27T00:00:00.000
{ "year": 2023, "sha1": "991f9c678b94f3e4fcf2cf13e0e25ceef16be32b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "991f9c678b94f3e4fcf2cf13e0e25ceef16be32b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
229392607
pes2o/s2orc
v3-fos-license
HOAX IN SOCIAL MEDIA AND IT’S THREATS TO ISLAMIC MODERATION IN INDONESIA Hoax is a human problem in this era of information. The presence of hoaxes causes information consumers to find it difficult to distinguish between true or false information, especially those that spread on social media. The main problem in this research is how hoaxes can threaten religious moderation in Indonesia. This study aims to analyze how hoax information on social media threatens the moderation of Islam in Indonesia. This research uses the library research method. The data were obtained from relevant library data sources and analyzed using a qualitative approach. The findings show that many hoaxes are conveyed along with religious and political information. Hoax on the political aspect aims to bring down political opponents or the government. In the religious aspect, hoaxes are used to attack opposing religious beliefs or schools. Hoaxes on these two aspects, especially religion, have the potential to divide people and destroy religious moderation in society. This research is expected to contribute to the study of communication, especially media and information. A. Introduction One of the needs of modern humans today is information. They need any information about their life. Even the level of dependence on information is a distinctive marker for modern human existence. This is what drives the growth of mass media in this modern era, be it print media or even electronic media. Even in the internet era, mass media variants have also developed with the presence of online media. The presence of the internet has influenced the evolutionary process of modern human communication and interaction. 1 Various information can be found in the mass media to answer the human need for information. The large number of mass media present today provides consumers with choices for a variety of information. Political, economic, cultural, sports, religious and various other information are mass media content that is often found and fills the information space that humans need in this information age. The presence of social media in modern human life has played a role in supplying fast information for today's information needs. Social media with its flexible character, easily accessible to anyone, anytime and anywhere, has played its influence as a source of information for the millennial generation and has played a role in shaping the character of their lives. Social media enables the implementation of interaction, communication and collaboration among users effectively, quickly, precisely and relatively cheaply. 2 All kinds of information scattered on social media further strengthen the character of modern humans who are very dependent on the existence of information. Whenever and wherever they continue to get the supply of information they need. However, it is unfortunate that social media cannot be completely relied upon as a source of information because invalid information was found but spreads so fast. This invalid information is not unintentional. On the contrary, this is something that is deliberately formed and disguised in various ways as if it were truth in order to achieve the purpose of its maker. This invalid information is popularly known as hoax. Social media has indeed become a place for hoax information about various things to grow. 
The character of social media where anyone can construct information about anything without any control regarding its validity then spread on social media so that hoax information appears more often on social media. Hoaxes are news or information that contains things that are not certain or that are really not facts. 3 In hoax information, facts are twisted, the truth is hidden. 4 Hoax as false information that is deliberately formed is actually made for certain benefits such as political gain. So, in moments related to politics, hoaxes can easily be found on social media that aim to attack or bring down political opponents. Outside the political context, hoax information on social media can be linked to religious issues. This is mainly due to the diversity of religious understanding in society, so that claims often arise to favor certain religious understandings and of course drop other religious understandings. Hoax information related to religious issues actually attracts the attention of social media users because it is usually peppered with religious arguments so that it is highly trusted by social media users. This phenomenon is actually very dangerous in the context of building moderate diversity. Hoax information on social media actually divides the community into their respective religious groups and brings down one another. How dangerous hoax information circulating on social media is against the moderation of Islam, especially in the context of Indonesian society, will be the main discussion in this article. The Evolution of Hoaxes Information: Theoretical and Practical In general, hoaxes are understood as fake news. Mc Dougall (1958) defines a hoax "deliberately concocted untruth made to masquerade truth". 5 The word hoax when traced from the history of the origin of the word was first popularly used in the mid to late 18th century. Hoaxes have a tendency to deceive the public. Hoax has the characteristics of deceiving a wide, popular and massive audience. 6 The word hoax has actually been around for hundreds of years. Around 1808 7 first appeared the term hoax in English. Written in a book by Linda Walsh entitled Sins Against Science. Hoax also comes from the words of the ancient magicians "Hocus Pocus", Latin for "Hoc est corpus", witches used it as a weapon to trick others with their own words which turned out to be deceptive. The description of hoax, which means a hoax, is also found in a book called Candle in The Dark by Thomas Ady in 1965. Around 2006 the use of the term hoax became popular, obtained from a film called Hoax, starring Richard Gere and directed by Lasse Halstorm. 8 The presence of hoaxes in the public sphere, especially in the mass media, is actually not a new thing, even though this phenomenon has only emerged in the era of the current flood of information. At first hoax news was used by some people as a joke, now it has caused unrest. Hoax news or fake news has spread widely and has had a negative impact. Therefore, with the hope that we will not easily accept all the news that is circulating, especially about news which contains things that are not good, do not make sense and the source of the news is unclear. It should be underlined, hoax news spreads easily in a short time, because most of the individuals themselves also spread the news without knowing the truth. 
9 The rapid dissemination of information without heeding the ethics of news in online media makes it difficult for readers to distinguish which information is true and which is falsified on Facebook, WhatsApp, Line, and massive instant message information for spreading fake news or hoaxes. 10 On social media, various types of hoaxes can be found, including: fake news, click bait (trap link), confirmation bias, misinformation, satire, post-truth and propaganda. 11 Hoaxes have undergone an evolution from what was originally just a joke, but then turned into dangerous information. Hoax is a force that can be used to form public opinion. And because the purpose is evil, the damaging effects of hoaxes are especially dangerous. In this modern era, hoax information is even deliberately made for the purpose of attacking certain parties. This was done, among other things, to take political advantage of the hoax information. In fact, sometimes this hoax information becomes a business field that generates a lot of money because of the many political moments that have resulted in the birth of many competitions for political power. Hoaxes are used to bring down political opponents by spreading negative issues so that voters will turn away from them. The use of hoaxes for certain interests is greatly assisted by the attitude of the community who is not critical in managing the information it receives. Hoaxes will tend to be accepted as truth without any action to clarify whether the information received has the truth or not. In many cases, hoax information has resulted in fatal chaos. Without clarifying, the parties who feel aggrieved take action which results in material damage and loss of human life. The phenomenon of this destructive hoax information is very dangerous if it continues to grow in society. Social Media and Hoax Information There are many questions about what social media really means. In fact, there is no single or fixed definition or definition of social media. The notion of social media generally describes the social media process itself which emphasizes the process of interaction between individuals by creating, sharing, exchanging, and modifying ideas or ideas in the form of virtual or network communication. However, there are some definitions from the following experts: Firstly, Kotler and Keller: Social media is the media used by consumers to share text, images, sound, and informational videos with others. 12 Secondly, social media (Facebook, Twitter, Youtube and Flickr) is a historical necessity that has brought changes in the process of human communication. The communication process, which has been carried out only through face-to-face communication, group communication, mass communication, has changed completely with the development of communication technology today, especially the internet. These changes will bring consequences to the communication process. The communication process that occurs has consequences at the individual, organizational and institutional levels. 13 Thirdly, Taprial and Kanwar: Social media is the media used by someone to be social, or get online social by sharing content, news, photos and others with other people. 14 Fourth, Kaplan and Haenlein: Social media is a group of internet-based applications that build on the ideological foundations of Web 2.0, and that allow the creation and exchange of User Generated Content. 
From this definition, Kaplan and Haenlein state that social media is a group of internetbased applications built on the ideological foundations of Web 2.0 which is a platform for the evolution of social media that allows the creation and exchange of User Generated Content. 15 From the three definitions of social media above, it can be concluded that social media is a vehicle for socializing oneself in the form of sharing text, video images. In this way people socialize themselves in a virtual community connected with the help of the internet. a. Characteristics of Social Media According to Taprial and Kanwar, social media has several characteristics as follows: Firstly, accessibility. Social media can be accessed easily by anyone who has a device connected to the internet. Therefore, social media is very easy to use by anyone and does not require special skills for it. Anyone with online access can use social media to communicate with others around the world. Secondly, interactivity. Communication through social media takes place in two ways or even more. Hence, social media users can interact with other social media users. Everyone can ask questions, discuss a product or other things that match their interests. Thirdly, longevity / volatility. Sent messages can be stored and accessed again for long periods of time. Even these messages can be edited and updated again at any time as needed. Fourth, affordability (Reach). Internet offers unrestricted access to all content contained in the invisible world. Everyone can access the internet from anywhere and anytime. Fifth, speed. Messages that have been created on social media can be accessed by everyone in the same network or group or forum or community as soon as the message is published. We can communicate with audiences without going through many obstacles that affect the delivery of a message. The response or responses given by the audience are also instant or immediate so that we can dialogue with the audience in real time. 16 It can be said that the characteristics possessed by social media are the main strengths or advantages of social media. This allows everyone to connect with other people and access information available on the internet. Interactions that are carried out online make no more barriers between social media users. Social media is generally used to keep in touch with friends or family, meet people who have the same interests, discuss issues, share opinions, give and answer questions, read reviews and so on. Humans use social media as a means of learning communication to increase knowledge and make the best decisions. Social media is also used in the world of business, politics, entertainment and others to target potential consumers and target consumers, interact with consumers, build or shape a company image and manage the company's reputation online. b. Social Media Functions To understand the function of social media, Kietzmann et. al. stated that the function of social media can be explained by using the Honeycomb framework which describes social media using seven pillars, namely identity, conversation, sharing, presence, relationship, reputation, and group. The descriptions of the seven pillars are as follows: Firstly, identity: how the user presents himself. Secondly, conversations: how users communicate with other users. Thirdly, sharing: how users exchange content, distribute content, and receive content. Fourth, presence: how users know the presence of other users. 
Fifth, relationship: how users relate to each other. Sixth, reputation: how users know the content and social position of other users. Seventh, groups: how users are in a community or group. 17 The seven functions above really represent social media as a vehicle for self-socialization even though they don't actually meet physically. In this day and age, self-interaction is only measured and represented by social media, which makes each other virtually connected. Social media deconstructs traditional social relationship patterns that require physical encounters. In the era of social media, humans are connected to each other by means of social media. Personally, social media provides space for each individual to connect with many other human characters who socialize through social media. As for companies, social media is an effective showroom for the products and services they produce. In general, there are many benefits that can be gained from the existence of social media in modern human life today. The spread of hoax information in society is influenced by the massive use of social media in society. Social media is a unique marker for the millennial generation. This generation feels that their life is not perfect if they do not have connections to social media. Social media itself is an information platform that is open and freely accessible to anyone. That's why a variety of information can be found on social media including hoax information. The era of the internet, which was followed by the presence of the latest communication technology media and social media in society, gave birth to a community context that was flooded with information. 18 The presence of the internet has an influence on communication activities. The internet has taken part in human life both positively and negatively. After connecting to the internet everyone can enjoy the positive impact of the internet. Among them is the availability of a lot of information both text, voice and images that can be accessed anytime and anywhere. The existence of the internet also makes it easier for humans to interact with other people without feeling obstructed by distance. According to Graham, interaction or interactivity is a way that runs between users or machines (technology) by enabling users and devices to connect interactively. Interaction is one of the characteristics of cyber media as a communication tool. 19 Through cyber media, every human being can be interactively connected to each other at the same time. Even the use of cyber media can represent the involvement of communication patterns, which at first could only communicate directly or face to face. It is in this context that information is easily obtained from various sources. In the past, information only came from limited sources and was considered authoritative as news sources, then in this era of information flood, information came from anywhere and from anyone without any clarity as to whether the source of information had the authority to disseminate information. In this situation, people absorb information from social media without the ability to filter the accuracy of the information. They then become entangled in information sharing activities and sometimes the information shared is a hoax. C. Methods This research was conducted with a library research approach. The main research data is in the form of relevant library sources. 
Data in the form of library analyses, statistical data, and related information sources were obtained from books, journals, and releases from parties concerned with social media information. The data were analyzed qualitatively: the material found was classified and organized, then analyzed in order to draw the research conclusions.

The Reality of Hoax Information on Social Media
Social media is a phenomenon of the times, and almost all levels of society are now connected to it. Sarwoto Atmosutarno, in the Social Media Optimization Guide published for the Ministry of Trade of the Republic of Indonesia, noted that by 2014 the number of internet users in Indonesia had reached 70 million, or 28% of the total population. Users of social media such as Facebook numbered around 50 million (20% of the population), while Twitter users numbered 40 million (16%). These figures continue to grow from year to year, supported by a large base of mobile phone and internet users. It can therefore be concluded that a third or more of the Indonesian population is now internet literate. 20

These figures had increased by January 2020. There were 338.2 million active mobile phone connections in Indonesia, equivalent to 124 percent of the population; 174.5 million active internet users, equivalent to 64 percent of the population; and 160 million social media users, equivalent to 59 percent of the population. Average daily internet use was 7 hours 59 minutes, and average daily social media use was 3 hours 26 minutes. 21

On a global scale, data from January 2018 put the number of internet users at 4.021 billion, or 53 percent of the total world population of 7.593 billion, with 3.196 billion active social media users and 5.135 billion mobile phone users (68 percent of the population). 22 By January 2020 these figures had changed significantly: with a world population of 7.75 billion, the number of internet users had grown to 4.54 billion (59 percent of the population), there were 3.80 billion social media users (49 percent of the population), and mobile phone users reached 5.19 billion (67 percent of the world's population). 23

The data above show that the majority of the population, both globally and in Indonesia, are active internet users connected to social media, and they constantly absorb information from it. All kinds of information are spread through social media, and every type of information can be assumed to have a consumer base that seeks it out via the internet or social media. One of the many kinds of information circulating on social media is hoax information. A survey conducted by Ismail Fahmi (Drone Emprit) shows that 92.40 percent of hoaxes in Indonesia are spread through social media. 24 Hoaxes are indeed an information phenomenon that can easily be found on social media in Indonesia. These hoax messages are spread in chains on social media without anyone knowing where the information came from, and without its validity ever having been tested.
In general, hoax information on social media relates to two aspects: politics and religion.

a. Hoax on the Political Aspect
The political agenda is fertile ground for the development of hoax information on social media. Hoaxes are seen as a powerful weapon to destroy the reputation of political opponents. Information containing hoaxes attacks certain political actors without clear sources, and the party attacked is kept busy issuing clarifications about the circulating hoax. Even after a clarification has been issued, not everyone sees it, and many tend to keep believing the hoax, which is very damaging to those it targets.

In Indonesia, there have been at least two political moments in which hoax issues dominated society's social media space: the 2017 DKI Jakarta Regional Head Election 25 and the 2019 Presidential Election of the Republic of Indonesia. Ahead of the 2017 DKI Jakarta Regional Election there were more than 1,900 reports of alleged hoaxes, and more than a thousand of these were confirmed to be hoaxes, mostly about politics related to the Jakarta Pilkada, with religious issues playing a major role. 26 In the 2017 DKI Jakarta Regional Election, hoax issues played out not only in the political realm but also around religion and ethnicity: jargon about a Muslim versus a non-Muslim governor, Chinese versus indigenous Indonesians, the personalities of the candidates, and many other hoax issues were easy to find circulating on social media.

In the 2019 election for President of the Republic of Indonesia, hoax issues again circulated. As in the 2017 DKI Jakarta Regional Election, hoaxes combining political and religious issues could easily be found on social media platforms. Circulating issues included the PKI, China, and the religion of the presidential candidates. In addition, many hoaxes concerned the electoral process itself, for example the neutrality of the KPU, discrepancies between quick counts and real counts, victory claims by candidates, and attempts to question the validity of the election results. The electoral process was continuously peppered with hoax information, both before and after it took place.

The two political events above illustrate how hoax information is used for political purposes. Incidents like these have been repeated at various political moments in Indonesia; the two examples above became phenomenal because they caught the attention of all Indonesians and even the international community. At present, every government political decision is followed by misleading hoax information. For example, hoax information followed the ratification of the Omnibus Law by the DPR: many hoaxes circulated about this matter, mainly attacking the DPR and the Government. This phenomenon is in fact a continuation of the same phenomenon seen during the political moments described above.

b. Hoax on the Religious Aspect
Apart from political issues, one of the themes that most often fills hoax content on social media is religion. Even political hoaxes are often laced with religious issues.
This shows that religious issues are seen as having a strong influence over many people, which is why they are so often used as hoax content on social media. Hoax information related to religion, especially in Indonesia, has in fact caused a great deal of damage, both to the fabric of community life and to the order of life as a nation and state. The religiously tinged conflict that occurred in Ambon City, Maluku, in 1999 illustrates the devastating effects of hoax information, and subsequent religiously tinged conflicts in various regions of Indonesia, such as in Poso, Central Sulawesi, were also triggered by hoax issues circulating in society.

The circulation of religiously tinged hoaxes in Indonesian society today is also strongly influenced by the arrival of new religious ideas in Indonesia which, if traced, have links to global currents of religious thought. In this situation there appears to be intense competition between religious beliefs or schools, a competition that seems to pit local Indonesian religious traditions such as NU and Muhammadiyah against newer religious understandings such as Wahabism and Salafism. Many local religious traditions that have long existed in the religious practice of Indonesian society are then misrepresented or condemned. This reflects the rivalry between religious understandings in Indonesia. Attacks on religious practices are usually not direct and formal but are carried out through social media, by posting memes that attack certain parties and sometimes twist the facts, for example by quoting the opinions of NU figures on commemorating the birthday of the Prophet Muhammad.

In addition to the examples above, many other religion-related hoaxes are created with the aim of inciting the emotions of fellow Muslims, for example captions about massacres of Muslims in certain places, hoaxes about whether certain food products or restaurants are halal, hoaxes about the Day of Judgment, and many more. Religion is indeed a hoax ingredient that keeps reappearing on social media, because the religious aspect is able to attract the attention of many people and can trigger solidarity and fanaticism in the name of faith. Hoax information with religious content therefore keeps surfacing, because it is seen as effective in achieving the hoax's own goals.

Threat of Hoax Information against Religious Moderation
Hoax information on social media has become a problem that concerns not only the information itself but nearly every aspect of human life, because information is an inseparable part of modern life and colors contemporary society. One aspect of Indonesian society exposed to this problem is moderate religious life. Indonesia has long been known for its diversity of religions, ethnicities, languages, cultures, customs, and other forms of diversity, and this diversity has almost never been a problem: cultural diversity, family backgrounds, religions, and ethnic groups interact with one another in Indonesian society. 27
In the religious field, there are at least five major religions in Indonesia: Islam, Protestant Christianity, Catholicism, Hinduism, and Buddhism. This religious diversity has never been a major problem; relations between religions work well because of the moderate views held by their adherents. Religious moderation is indeed the key to the harmonious interreligious relations built in Indonesia. According to Lukman Hakim Saifuddin, religious moderation refers to an attitude of reducing violence, or avoiding extremes, in religious practice. It must be understood as a balanced religious attitude between the practice of one's own religion (exclusive) and respect for the religious practices of others with different beliefs (inclusive). This balance, or middle way, in religious practice keeps us from excessive extremism, fanaticism, and revolutionary attitudes in religion. Religious moderation is a solution to the presence of two extremes in religion: the ultra-conservative pole, or extreme right, on the one hand, and the liberal pole, or extreme left, on the other. 28

Islam itself, as a religion, emphasizes moderation in its teachings. In its semantic aspect as a religion of safety and peace, Islam has shown the essence of moderate Islam, and this can be seen in the long history of Islam in Indonesia. Moderation is a core teaching of Islam, and moderate Islam is a religious understanding that is highly relevant in the context of diversity in all its aspects, including religion, custom, ethnicity, and the nation itself. 29

The threats that hoaxes on social media pose to the moderation of Islam in Indonesia include the following. First, the threat of radicalism. Radicalism is a real threat to the moderation of Islam in Indonesia. Radicalism is a notion that seeks drastic change through violence, and it becomes even more dangerous when it attaches itself to ideological aspects such as religion. On social media, hoax information is strongly tied to this ideological aspect: religion is a very popular selling point in the production of hoaxes. Hoaxes with religious content on social media usually contain hate speech and attacks on targeted parties, and are sometimes followed by radical actions such as terrorism.

Second, the threat of division among Muslims. Another danger contained in hoax information on social media is the division of Muslims. Without realizing it, hoax information on social media tends to pit Muslims against one another. This practice is aimed primarily at the religious practices carried out by the different religious schools that exist in Indonesia. Hoax information of this kind usually aims to foster mutual hatred among Muslims, leading to mutual accusations of blasphemy and sometimes even to physical confrontation between supporters of religious sects. 30

Third, the threat of intolerance.
Diversity in religion, ethnicity, culture, and other respects is an unavoidable reality in Indonesia, a country that is highly pluralistic in many ways. Even so, efforts to manage this diversity so that it remains the collective strength of the Indonesian nation are under threat from the continuous flow of hoax information on social media. Much misleading information on social media tends to reject diversity and encourage intolerance, especially in matters of religion. Religion comes to seem like a wall separating Indonesian society into closed ideological boxes, something that contradicts the noble values of Islam itself.

The analysis above shows how hoax information on social media threatens the moderation of Islam in Indonesia. Islam is one of the values that has shaped the character of the Indonesian nation as a polite, tolerant, and peace-loving people, yet with the hoax information spreading on social media lately, Indonesian Muslims appear to be losing their identity as a peace-loving people. In the end, hoaxes can no longer be seen merely as meaningless chain messages on social media; they must be seen as a real threat to the life of the Indonesian people. The public must be made aware of hoax information on social media and of its dangers for social harmony. Massive literacy efforts are needed so that people can become healthy social media users and are not trapped as producers, consumers, and carriers of hoax information.

E. Conclusion
Hoaxes, which were originally just jokes, have evolved into distorted and dangerous information and have become massive, especially in the era of social media. Social media, created to supply information to its users, has become a vehicle for the circulation of hoaxes. Hoax information on social media is a real threat to the moderation of Islam in Indonesia. This information, which generally relates to politics and religion, threatens the moderation of Islam in Indonesia in three forms: radicalism, division, and intolerance. These three threats erode the image of Indonesian Muslims as a polite, tolerant, and peace-loving people. Literacy efforts in the use of social media are needed so that Indonesian society can respond wisely when encountering hoax information.
2020-12-27T10:09:17.396Z
2020-11-26T00:00:00.000
{ "year": 2020, "sha1": "8d80e23270fce1e35a46514897f5d37505b28c33", "oa_license": "CCBY", "oa_url": "http://proceedings.uinsby.ac.id/index.php/ICONDAC/article/download/386/418", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "600a8222c903314ac15ecc1928c06cc23b04e345", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Political Science" ] }
242240288
pes2o/s2orc
v3-fos-license
The impact of time from injury to surgery on functional recovery after traumatic acute subdural hematoma

Background: The time from injury to surgery (TIS) is critical for the functional recovery of individuals with traumatic acute subdural hematoma (TASDH). However, only a few studies have confirmed this notion. Methods: The data of TASDH patients who were surgically treated in Chia-Yi Christian Hospital between January 2008 and December 2015 were collected. The significance of variables, including age, sex, traumatic mechanism, coma scale, midline shift on brain computed tomography (CT) scan, and TIS, in functional recovery was assessed using Student's t-test, the chi-square test, univariate and multivariate regression models, and the receiver operating characteristic (ROC) curve. Results: A total of 37 patients achieved functional recovery (outcome scale score of 4 or 5) and 33 patients had poor recovery (outcome scale score of 1-3) after at least 1 year of follow-up. No significant difference was observed in age, sex, coma scale score, traumatic mechanism, or midline shift on brain CT scan between the functional and poor recovery groups. TIS was significantly shorter in the functional recovery group than in the poor recovery group (145.5±27.0 vs. 181.9±54.5 minutes, P-value=0.001). TIS was a significant factor for functional outcome in the univariate and multivariate regression models. ROC curve analysis of TIS between the two groups showed that the threshold time for functional recovery in comatose TASDH patients who were surgically treated was 2 hours and 57.5 minutes. Conclusions: TIS is an important factor for the functional recovery of comatose TASDH patients who undergo surgery.

Background
Traumatic acute subdural hematoma (TASDH) is one of the most devastating types of traumatic brain injury (TBI), with a mortality rate ranging from 30% to 70% [1][2][3][4]. An emergent operation is considered if a patient is in a coma or meets the surgical indications for TASDH. In 1981, Seelig et al. reported that the mortality rate of TASDH could be reduced from 90% to 30% if the subdural hematoma was removed within 4 hours after injury [5]. Although a few reports have shown similar findings [6,7], a number of subsequent studies have failed to identify an effect of time to surgery on mortality rate [3,[8][9][10][11][12][13][14].
In fact, some studies have reported a significant association between faster time to surgery and higher mortality rate [15,16]. Thus, we evaluated the data of TASDH patients who were surgically treated from 2008 to 2015 in Chia-Yi Christian Hospital in Taiwan. In this study, the effect of time from injury to surgery (TIS) on the outcomes of TASDH patients who were in a coma from the time of trauma and who did not regain consciousness before surgical intervention was examined.

Methods
This study (CYCH-IRB 106074) was conducted after obtaining approval from the ethics committee of Chia-Yi Christian Hospital and was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments. Patients with closed-head injury who had acute subdural hematoma on brain CT scan and who underwent craniotomy or craniectomy for the removal of hematoma were included in our study. Patients with epidural hematoma, penetrating head injury, or intraparenchymal hemorrhage were excluded; however, patients with TASDH and concomitant intraparenchymal hemorrhage that did not require evacuation were included. Between January 2008 and December 2015, a total of 235 patients from the Neurosurgical Department of Chia-Yi Christian Hospital in Taiwan met the criteria for TASDH. Patients who had thoracic, abdominal, or pelvic injury (n=10) or who had no record of the time of injury (n=6) were excluded. Based on our exclusion criteria, patients with a coma scale score >8 (n=114, coma scale score of 9-15) and those aged >70 years (n=19) were not included. Furthermore, 16 patients with a coma scale score of 3 or 4 combined with bilateral pupil dilation were excluded, of whom 13 died and three were in a vegetative state. A total of 70 patients met the criteria and were included for further analysis. The surgical indication and treatment of acute subdural hematoma were in accordance with the guidelines on the Surgical Management of Acute Subdural Hematoma [1].

For the included patients, the following data were extracted from the medical database of our hospital: age; sex; trauma mechanism; coma scale score; pupil size and light reflex; midline shift on brain CT scan; whether craniotomy or craniectomy and evacuation of the acute subdural hematoma were performed; postoperative intracranial pressure (ICP) in the surgical intensive care unit (mean value obtained during the second day after the operation); information about postoperative complications or reoperation; time of injury notification (according to ambulance station records in 56 patients or witness reports in 14 patients); arrival time at the emergency room of our hospital; time of surgery initiation; and surgical outcomes. TIS was defined as the time from the documented injury notification to the initiation of surgery. The surgical outcomes were assessed using the Glasgow Outcome Scale (GOS) at least 1 year after the injury. In most cases, the outcomes were recorded during a follow-up visit to the neurosurgeon; in a few cases, they were recorded via phone call by neurosurgical staff. Functional recovery was defined as a GOS score of 4 or 5. Severe neurological deficit, vegetative state, and death were considered poor outcomes.

Statistical analysis
Student's t-test or the chi-square test was used for the comparison of variables between the functional and poor recovery groups (Table 1); an illustrative sketch of these comparison steps is given below.
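The analyses in this study were carried out in SPSS, and the patient-level data are not public; purely as an illustration of the comparison steps just described, a minimal Python sketch is shown below. The file name and the column names (gos, age, coma_scale, midline_shift, tis_min, sex, trauma_mechanism, operation_type) are hypothetical and do not correspond to the actual dataset.

```python
# Illustrative sketch only; the original analysis used SPSS and the patient-level
# data are not publicly available. File and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("tasdh_patients.csv")            # hypothetical per-patient table
df["functional"] = (df["gos"] >= 4).astype(int)   # GOS 4-5 = functional recovery

# Continuous variables: Student's t-test between the two recovery groups
for col in ["age", "coma_scale", "midline_shift", "tis_min"]:
    grp1 = df.loc[df["functional"] == 1, col]
    grp0 = df.loc[df["functional"] == 0, col]
    t, p = stats.ttest_ind(grp1, grp0)
    print(f"{col}: t = {t:.2f}, p = {p:.3f}")

# Categorical variables: chi-square test on the cross-tabulation with outcome
for col in ["sex", "trauma_mechanism", "operation_type"]:
    chi2, p, dof, expected = stats.chi2_contingency(pd.crosstab(df[col], df["functional"]))
    print(f"{col}: chi2 = {chi2:.2f}, p = {p:.3f}")
```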
In particular, Student's t-test was used to evaluate continuous or numeric variables (such as age, coma scale, and midline shift on brain CT scan), and the chi-square test was used to assess non-numeric variables (including sex, traumatic mechanism, and type of operation). In addition, univariate and multivariate logistic regression models were used to analyze the impact of each variable on outcomes (Table 2). Preoperative systemic diseases and postoperative comorbidities are presented in Table 3. All statistical analyses were performed using the Statistical Package for the Social Sciences software for Windows version 21.0 (IBM Corp., Armonk, NY, USA). Age and coma scale were classified into three groups for analysis (Tables 1 and 2). P-values <0.05 were considered statistically significant. Finally, the receiver operating characteristic curve was used for the analysis of TIS in the functional and poor recovery groups (Fig. 1).

Results
Of the 235 TASDH patients who were surgically treated, 70 were included in our study. The demographic data of these patients are presented in Table 1. In total, 37 patients achieved functional recovery (n=11, GOS score of 5; n=26, GOS score of 4) and 33 patients had poor recovery (n=15, severe neurological deficit; n=9, vegetative state; n=9, death). The mean age of the whole group was 50.9±14.6 (range: 16-70) years. Among the patients, 49 (70.0%) were men and 21 (30.0%) were women. The mean coma scale score was 5.9±1.1, the mean midline shift on brain CT scan was 10.0±5.2, and the mean TIS was 162.5±45.6 minutes (Table 1).

Difference in each variable between the functional and poor recovery groups
Age, sex, coma scale, trauma mechanism, pupil size and light reflex, type of operation (craniectomy or craniotomy), and midline shift on brain CT scan did not differ significantly between the two recovery groups (Table 1). The postoperative ICP was significantly lower in the functional recovery group than in the poor recovery group (p=0.003, t-test). The TIS was 145.5±27.0 minutes in the functional recovery group and 181.9±54.5 minutes in the poor recovery group. Student's t-test revealed that TIS was the most significant variable in distinguishing the two recovery groups (p=0.001, Table 1).

Analysis of variables in the univariate and multivariate logistic regression models
Each variable (age, sex, coma scale, pupil size and light reflex, traumatic mechanism, type of operation, and TIS) was analyzed using the univariate and multivariate logistic regression models. The results revealed that TIS was a significant factor for functional outcome in both regression models. Age, sex, coma scale, pupil size, type of operation, and traumatic mechanism were not significantly associated with functional outcomes (Table 2).

Significance of TIS
The TIS was analyzed with the ROC curve, and the results are shown in Fig. 1. The threshold time for functional recovery was 2 hours and 57.5 minutes, with a specificity of 0.919 and a sensitivity of 0.515. This result indicated that the probability of functional recovery in a comatose TASDH patient who undergoes surgery within 2 hours and 57.5 minutes was 51.5%, whereas that of a patient who undergoes surgery after the threshold time was 8.1% (100% − 91.9%). The area under the curve was 0.713, which supported the credibility of the ROC curve, and the P-value was 0.002.
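To make the regression and ROC steps concrete, the sketch below continues the hypothetical example from the Methods section, using statsmodels for the logistic model and scikit-learn for the ROC curve. Maximizing Youden's J is one common way to obtain a cut-off such as the 2 hours and 57.5 minutes reported above; the libraries, column names, and setup are assumptions for illustration and do not reproduce the authors' actual SPSS workflow.

```python
# Continuation of the illustrative sketch; assumes the DataFrame `df` defined above.
import numpy as np
import statsmodels.formula.api as smf
from sklearn.metrics import roc_curve, roc_auc_score

# Multivariate logistic regression for functional recovery (hypothetical covariates)
model = smf.logit("functional ~ age + C(sex) + coma_scale + tis_min", data=df).fit()
print(model.summary())

# ROC analysis of TIS: shorter TIS should predict functional recovery, so a higher
# score must correspond to the positive class -- use the negated TIS as the score.
score = -df["tis_min"].to_numpy()
y = df["functional"].to_numpy()
fpr, tpr, thresholds = roc_curve(y, score)
print(f"AUC = {roc_auc_score(y, score):.3f}")

# Youden's J = sensitivity + specificity - 1; its maximum gives one possible cut-off.
j = tpr - fpr
best = int(np.argmax(j))
print(f"Threshold TIS ~ {-thresholds[best]:.1f} min "
      f"(sensitivity {tpr[best]:.3f}, specificity {1 - fpr[best]:.3f})")
```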
Comorbidities between the functional and poor recovery groups
Regarding preoperative systemic disease, only cardiovascular disease, diabetes mellitus, and liver cirrhosis were found; these are presented in Table 3. No patient took coumadin or new oral anticoagulants before craniotomy and removal of the subdural hematoma. Two patients in the functional recovery group took 100 mg aspirin because of coronary artery disease, but they did not develop postoperative intracranial hemorrhage. Postoperative infection, seizure, and reoperation for intracranial hematoma are also presented in Table 3. These data did not differ significantly between the functional and poor recovery groups.

Discussion
The present study aimed to determine whether TIS affects the degree of functional recovery in TASDH patients who required emergency craniotomy and removal of acute subdural hematoma. Various statistical methods were used. Between the functional and poor recovery groups, only TIS showed a significant difference (Table 1). When univariate and multivariate logistic regression models were applied, TIS had a significant effect on functional recovery (Table 2). TIS was further analyzed with the ROC curve. The threshold time for functional recovery was 2 hours and 57.5 minutes, with a specificity of 0.919 and a sensitivity of 0.515. This result indicated that the probability of functional recovery in a comatose TASDH patient who undergoes surgery within 2 hours and 57.5 minutes was 51.5%, whereas that of a patient who undergoes surgery after the threshold time was 8.1% (100%−91.9%).

TIS was thus a significant factor influencing functional outcome in our study. In previous studies, such as that of Dent et al. (1995), counter-intuitive results have shown that a shorter TIS correlates with poor functional recovery [15,16]. However, this finding is attributed to significant selection bias, because patients with more severe injuries are more likely to undergo earlier surgery, which will certainly skew the results. Dent et al. (1995) showed that patients who had surgery within 4 hours were more likely to have a lower Glasgow coma scale score, more severe intracranial injuries, and a greater incidence of brain herniation than those who had surgery after 4 hours. To prevent a similar bias, we included only patients who had a coma scale score of 4-8, were younger than 70 years, and had no structural brain injury other than TASDH. Moreover, patients with torso injuries were excluded because such conditions are commonly accompanied by hypotension and additional systemic complications. A multivariate logistic regression analysis that includes multiple variables further reduces the likelihood of selection bias.

According to the hypothesis of Mathai et al. (2010), the onset of life-threatening brain swelling in patients with severe TBI occurs 2-3 hours after the injury and may be attributed to the osmotic load exerted by the breakdown of membrane debris and cytoplasmic structures [17]. In the report of Haselsberger et al. (1988), the surgical outcomes of TASDH patients were influenced by preoperative consciousness status [6]: when the time interval between the onset of coma and surgical decompression exceeded 2 hours, the mortality rate increased from 47% to 80%. Meanwhile, Seelig et al. (1981) reported an increase in mortality rate from 30% to 90% if TIS exceeded 4 hours [5].
The current study focused on the functional outcomes of TASDH patients who were in a coma and required emergency surgery. Our statistical analyses revealed that TIS was a significant factor and that the threshold time for surgery in TASDH patients should be considered in order to achieve functional recovery. With regard to the factors influencing outcomes, the impact of age and coma scale on functional recovery has been studied most frequently in the past [18][19][20]. Our data did not show that younger patients or those with a higher coma scale score were more likely to obtain better outcomes (Table 1), and this result may be attributed to two reasons. First, only 70 sets of data were included in our study, which may be considered a small sample size. Second, based on our exclusion criteria, 19 patients who were older than 70 years (n=8, severe neurological deficit; n=7, vegetative state; n=4, dead) and 16 patients with a coma scale score of 3 or 4 combined with bilateral pupil dilatation (n=3, vegetative state; n=13, dead) were not included. These reasons reduced the impact of age and coma scale on outcomes in this study.

As with all studies, the present study had some limitations. It had a small sample size and was conducted at a single center, and exclusion criteria were applied to age and coma scale score. Nevertheless, this study can be helpful in understanding the importance of TIS in patients with TBI and can provide valuable contributions to future related studies. The time lapse from injury has been considered a critical factor since the study of Seelig et al. (1981), although several authors have reached different conclusions [3,9,13,14,16,17]. Our study included TASDH patients who were surgically treated from 2008 to 2015. With the use of the exclusion criteria, we believe that our sample is reasonable and that some obvious selection biases were eliminated. Thus, TIS is an important factor for the functional recovery of TASDH patients.

Ethics approval: This study was performed in accordance with the 1964 Declaration of Helsinki and its later amendments. As this is a retrospective study, informed consent was not required. Details that might disclose the identity of the subjects under study are omitted. Consent for publication: I give my consent for information about my manuscript to be published in BMC Neurology. Availability of data and materials: The materials described in my manuscript will be freely available to any scientist who wishes to use them for non-commercial purposes. They are presented in the attached file.
2020-03-19T10:24:10.013Z
2020-03-17T00:00:00.000
{ "year": 2020, "sha1": "a6acc298fa566d60206131113da0c9790622e9aa", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-9656/v2.pdf?c=1585619790000", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "2e58b8b84ca01d452c9f8e46fa0f113035bd3b17", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
259883109
pes2o/s2orc
v3-fos-license
A power-sharing perspective on employees' participatory influence over organizational interventions: conceptual explorations

A participatory approach is widely recommended for organizational interventions aiming to improve employee well-being. Employees' participatory influence over organizational interventions implies that managers share power over decisions concerning the design and/or implementation of those interventions. However, a power-sharing perspective is generally missing in the organizational intervention literature. The aim of this paper is therefore to broaden the picture of the mechanisms that influence, more or less, participatory processes by conceptually exploring this missing part of the puzzle. These conceptual explorations depart from both an empowerment and a contingency perspective and result in six propositions on what to consider in terms of power-sharing strategies, reach, amount, scope, culture, and capacity. Implications for research, as well as for organizations and practitioners interested in occupational health improvements, are then discussed. In particular, the importance of aligning power-sharing forms with the needs of the participating employees, and of taking into consideration factors that can facilitate or hinder the power-sharing process, is stressed. The importance of training managers in power-sharing practices and of supporting a participatory process is also highlighted.

Introduction
Organizational interventions focus on change in how work is organized, designed, and managed in order to improve the well-being of employees (Nielsen et al., 2010). By targeting improvements in factors that contribute to the work environment, organizational interventions have the potential to benefit many people at the same time, over time. Therefore, they are generally recommended for improving employee well-being (e.g., Kelloway and Day, 2005; Nielsen and Noblet, 2018). Organizational interventions often address problems emerging from employees' concerns about their work environments (Tvedt and Saksvik, 2012). Consequently, employee participation in the development of efficient and effective solutions to identified problems is a natural next step. For example, in a recent conceptual paper aiming to identify important principles for designing, implementing, and evaluating organizational interventions, employee participation was the first principle (von Thiele Schwarz et al., 2021). Thus, systematic approaches to solving identified work-environment problems unanimously highlight employee participation (Nielsen and Abildgaard, 2013; Fridrich et al., 2015; von Thiele Schwarz et al., 2016).

Beyond improving the identification of problems and solutions, employee participation can also help contextualize intervention activities to improve their fit with ongoing organizational operations, and it can help align these activities with the needs of those involved (von Thiele Schwarz et al., 2016; Lundmark et al., 2021). Employee participation has even been suggested as a possible intervention in itself, because the empowerment experience that comes with active participation contributes to improved employee well-being (Theorell, 2003). However, although participation is widely recommended, there is little guidance on what it really means in terms of who should participate, to what extent, and how it can be achieved. The aim of this paper is to start filling this gap by focusing on how different levels of employee participation can emerge from different forms of power sharing.
The aim is also to elaborate on five factors known to influence the power sharing-participation process: Reach (i.e., who are participating?; Lehmann et al., 2022), Amount (i.e., how much power is shared?; Lee et al., 2017), Scope (i.e., what kind of decisions are shared?; Richardson et al., 2021), Culture (i.e., where is the intervention taking place?; Tvedt and Saksvik, 2012), and Capacity (i.e., what are the prerequisites?; Coffeng et al., 2021). Thereby, starting to fill the gap on what organizations should consider when involving employees in the design and implementation of organizational interventions. Hence, it concentrates on the control over decisions part of the participation process, explicating the ways organizations may foster participation during interventions, and what they can expect from doing so. Thereby, it adds a missing part to the puzzle to the understanding of the power sharing-participation process. As of now, only advocating participation without proposing how this can be achieved (i.e., in terms of power sharing) increases the risk of making participation in organizational interventions nothing but a fancy phrase. In addition, participation without the appropriate mandate to influence decisions may lead to outcomes adverse to what is desired, such as resistance to change instead of empowerment. By highlighting the roles of potential approaches and boundary conditions, guidance for organizations on what to consider is provided. The paper is structured so that it first describes two commonly used points of departure for examining power sharing: empowerment theory and contingency theory. Thereafter, different forms of power sharing are clarified, as is what can be expected from them in terms of employee participation during organizational interventions. Current knowledge on the influence of different conditions is depicted, and from that, propositions are made to introduce and guide the understanding of employees' participatory influence over organizational interventions from a power-sharing perspective. Finally, implications for research and practice are discussed. Two points of departure: empowerment and contingency theories Empowerment and contingency theories are widely applied in literature to explicate the power-sharing-participation relationship and its outcomes (Cheong et al., 2019). Although representing different perspectives, they can both contribute to the understanding of power sharing as a way to enable employee participation during organizational interventions. Psychological empowerment (Spreitzer, 1995) is a motivational state comprised of meaning (i.e., alignment between one's own ideals and the requirements of one's work role), competence (i.e., belief in one's capability to perform well within a work role), self-determination (i.e., autonomy in the performance of one's work role), and impact (i.e., possibilities to influence outcomes at work). Psychological empowerment has been positively associated with an extensive number of desirable employee and team well-being and performance outcomes and found to be a way of promoting democracy at work (Seibert et al., 2011). These cognitions echo an active participatory orientation to decision-making in which employees are interested in and able to form their work roles and influence the context of those roles. 
Hence, a participatory approach to decision-making is an important antecedent to employee empowerment (on both the individual and group levels) that, in turn, can be seen as a mechanism for producing beneficial employee and team outcomes (Seibert et al., 2011). Psychological empowerment has also been found to mediate the relationship between job crafting (i.e., alterations to the job made through employee initiative; Tims et al., 2016) and employee outcomes such as job performance (Maden-Eyiusta and Alten, 2021). Thus, from an empowerment perspective, employees' active participation in crafting organizational interventions can be seen as beneficial in itself. That is, rather than a means to an end, high levels of employee participation in the design and implementation of organizational interventions are part of the end. However, to achieve a sufficient level of such participation practices, a certain degree of power sharing is necessary (Abildgaard et al., 2020). Adding to this perspective is that with more power over decisions, employees have greater chances of controlling which activities to implement, what tasks to perform, and how to perform these tasks based on their competences and needs (Biron and Bamberger, 2011). It has also been suggested that allowing employees to act within their competencies enhances their sense of control. In turn, this can buffer the detrimental effects of increased demands on well-being and performance (Van Yperen and Hagedoorn, 2003). Based on these arguments, creating a fit between employees' competences and needs and an organizational intervention is often highlighted (Lundmark et al., 2021). To create such an intervention fit, employees need to be active participants rather than passive recipients in the process of creating and implementing organizational interventions (von Thiele Schwarz et al., 2016).

A contingent perspective instead emphasizes that the effectiveness of different forms of power sharing is dependent on specific situational factors (Vroom and Jago, 2007; Oc, 2018). In line with this contextual focus, the conditions under which managers and employees interact and under which organizational interventions take place are often stressed (e.g., Lundmark et al., 2020). Taking a contingent perspective on organizational interventions involves asking questions such as: (1) Where is this taking place (e.g., in terms of country and culture)? (2) Who is involved (e.g., composition of those involved)? (3) When (e.g., during turbulent times)? It also involves appraising aspects of the work at hand in terms of job characteristics related to the task and any social, physical, or temporal issues (Oc, 2018). Specific aspects can also combine to produce outcomes; for example, time spent on discussions in teams may be related to team climate, and team climate may, in turn, influence time spent on discussions. Furthermore, these aspects are seen both as a potential antecedent to the power-sharing process (e.g., determining what form of power sharing is possible) and as a moderator in the process (e.g., determining the effect of different power-sharing forms; Richardson et al., 2021). From this viewpoint, a form of power sharing (e.g., shared decision-making) that is effective in one situation may prove totally ineffective in a different situation (Schweiger and Leana, 1986; Vroom and Jago, 2007). Hence, managers' power sharing with employees should be adapted to fit the circumstances of each specific situation.
Although participation here can be understood as a means rather than an end, Vroom (2003) clearly stated that, apart from decision effectiveness, employee development should be considered when choosing the power-sharing-participation strategy. Thus, if employee development is of the essence (e.g., viewed as a goal), this can help determine employees' levels of participation in decision-making processes. In the following sections, power sharing during organizational interventions, its outcomes, and five potentially influential aspects that shape the power-sharing-participation-outcome process (see Figure 1) are deliberated. The aspects considered in this paper are not meant to be exclusive or exhaustive but rather a starting point for further explorations.

Figure 1. The power-sharing-participation-outcome process.

Power sharing strategies
As Abildgaard et al. (2020) pointed out, there is a difference between participating in intervention activities and having participatory influence over decisions on what kind of intervention activities are suitable. However, whether employees participate only marginally by taking part in intervention activities or exert a participatory influence over interventions is, in turn, ultimately a question of power sharing in various degrees (Hollander and Offermann, 1990). In other words, employee participation is directly dependent upon managers' sharing of power in some form, and the degree of power sharing will accordingly affect the level of employee participation. Power sharing as a way to enable participation can be seen as a continuum reflecting the amount of power being shared (Vroom, 2003; Biron and Bamberger, 2011). At one end is autocratic decision-making, where employees have no influence over decisions. At the other end is delegation, where employees are allowed to make decisions on their own (i.e., power is distributed rather than shared; Hollander and Offermann, 1990). In between autocratic decision-making and delegation are power sharing through consultation and shared decision-making, in which employees are asked for opinions before decisions are made or are invited to co-decide (Vroom, 2003).

Autocratic decision-making involves no involvement of employees in decisions and can hence be described only in terms of an obligation to partake in activities according to premade decisions (Hollander and Offermann, 1990). Managers thus announce decisions for employees to heed (Hollander and Offermann, 1990; Vroom, 2003). This form of decision-making allows no employee participation in determining the goals, content, and processes of an organizational intervention. Such minimal employee influence is associated mainly with interventions where these components are preset (Abildgaard et al., 2020). In other words, employee participation is understood primarily in terms of fidelity to and/or compliance with an intervention protocol.

Consultation involves managers asking for employees' ideas or suggestions (Tangirala and Ramanujam, 2012). Here, employees are given the possibility to influence decisions prospectively and indirectly by giving their views on matters. If managers listen to employee concerns, consultation can enhance the employees' sense of control and confidence in their abilities to influence decisions (Tangirala and Ramanujam, 2012).
From an organizational intervention perspective, employee involvement in decisions on a consultation level has been highlighted as a minimal form of participation (Abildgaard et al., 2020). Even if intervention goals are preset, consultation allows for the creation of a better fit between the intervention and the employees' concrete needs and competences and between the intervention and the context in which it takes place (Randall and Nielsen, 2012). For example, employees can contribute with suggestions for how activities can be adapted or adjusted and can provide input on the timing of different activities (von Thiele Schwarz et al., 2016). Shared decision-making occurs when employees are engaged in specific decisions on terms equal to those of their managers (and, in some cases, other stakeholders; Hollander and Offermann, 1990). Here, rather than having decisions delivered to them, the process is viewed as a joint venture for creating value for employees and their organizations (Payne et al., 2008). In this form of decision-making, managers strive for concurrence on decisions, and their role is to act as facilitators who define problems and boundaries (Vroom, 2003). Employee participation in the co-creation of an organizational intervention is often recommended (von Thiele Schwarz et al., 2016). Beyond the benefits in terms of intervention fit, increased employee empowerment is often underscored as an instrument for implementation success (e.g., by contributing to higher levels of engagement and attendance in activities; Nielsen, 2013). Because an increase in employees' control over decisions may contribute to their development, well-being, and productivity, shared decision-making has been suggested as an intervention, or intervention goal, in itself (Theorell, 2003). Delegation is suggested as the power-sharing practice that can enable employee empowerment to the highest degree (Richardson et al., 2021). In delegation, managers allocate decision-making authority to employees as opposed to situations where leaders make decisions either alone or jointly (Cheong et al., 2019). Because delegation emphasizes employees' autonomy and enhanced responsibilities, it is also seen as the direct opposite of autocratic decision-making. Thus, delegation implies moving the authority from one level to another-distributing rather than sharing the power (Leana, 1986). Delegation is highly associated with empowerment and employee development and clearly focuses on participation as a goal in itself (Vroom, 2003;Biron and Bamberger, 2011). Despite this, intervention evaluation studies have seldom suggested delegation as a strategy for enhancing employee participation (Abildgaard et al., 2020). This may be due to the work environment statutes bestowed upon managers, making it necessary for them to retain some level of control over decisions concerning intervention designs. At the same time, a bottom-up approach, where employees initiate and suggest interventions, has been highlighted as a token of true organizational interventions (Tvedt and Saksvik, 2012). In practice, a combination of power-sharing approaches may be, and often are, used. For example, senior management may decide autocratically upon the focus of the intervention (e.g., redistributing workload), but consult and share decisions with employees on how and when the change should be implemented, and delegate responsibility for its implementation. 
Such combined power-sharing approaches can save time initially, but they also risk missing the target, as the content may not match what employees perceive as their primary needs (e.g., Biron et al., 2010). Shifting power-sharing strategies can also lead employees to perceive their managers' intentions as confusing, and thus trigger a time-consuming sensemaking process about what mandates exist and what is expected from employees (Schilling et al., 2022).

Proposition 1: Employee participation during organizational interventions is dependent upon managers' power-sharing strategy. Organizations should explicitly consider what strategy is most appropriate to use given what they wish to achieve in terms of participation, and ultimately intervention outcomes.

Reach of power sharing
With the exception of autocratic decision-making, all other power-sharing forms can be performed in either a dyadic (manager and employee) or collective (manager and group) manner (Vroom, 2000). In practice, power-sharing practices, especially delegation, generally seem to be more commonly performed on an individual level (Richardson et al., 2021). A reason for this may be that sharing power on a group/collective level demands sufficient time and involves a greater likelihood of disagreements (Vroom, 2003). Additionally, shared decision-making and delegation can be viewed as more delicate and risky than other forms of power sharing because they involve the consideration of more factors (e.g., employee competence and possibilities for job expansion; Richardson et al., 2021). Therefore, managers are more likely to choose employees that they perceive as approachable when distributing power (Leana, 1986). In contrast, power sharing on an individual level during organizational interventions is seldom recommended. Instead, involving all targeted employees is often emphasized (Lehmann et al., 2022). Evidence also suggests that a collective participation process is more effective, since it contributes to increased engagement and better team functioning and thereby influences outcomes to a higher extent.

Representative participation in decision-making is also a common phenomenon (Helland et al., 2021). Representative (i.e., indirect) participation in decision-making can be seen as power sharing on an individual level, even though the representative may have involved others before engaging in the decision-making process. For example, a health and safety officer can act as a representative for the employee collective. A representative can participate in the decision-making process to a greater or lesser extent and can, in turn, involve the employee collective in the process to a greater or lesser extent (Helland et al., 2021). From a managerial perspective, this may be considered a preferable option, especially in large-scale interventions conducted in large organizations, because it helps reduce time, costs, and logistical problems. Still, from the perspective of empowerment and democracy at work, indirect involvement may reduce the chances of achieving the beneficial employee outcomes that could be expected from being a direct part of a shared decision-making process. In addition, the success of power sharing through representatives likely depends upon whether the representative involves other employees and whether the representative is viewed appropriately as such (Abildgaard et al., 2020). As Lehmann et al.
(2022) have shown, directly participating employees are more likely to experience improvement in intervention outcomes. In contrast, employees who participate indirectly (i.e., through a representative) not only benefit less but can also experience deterioration in intervention outcomes, implying that not being able to participate directly could have a worsening effect. As the fit of an intervention with employee needs also influences intervention outcomes (Lundmark et al., 2018), indirect participation, or a low degree of direct participation in decisions, likely reduces the possibilities for such alignment and thus risks missing targeted objectives.

Power sharing is mainly described in the literature as a phenomenon between first-line managers and employees (Vroom, 2003). Line managers are also the focus of organizational intervention studies, as they are often the ones responsible for transforming plans into actions, communicating change, and following it up (Lundmark et al., 2020). However, this presupposes that line managers have, at some level, a mandate from senior management to craft changes together with employees. Line managers' prerequisites in terms of such a mandate in organizational intervention initiatives have seldom been discussed. Such trickle-down effects (i.e., from senior management to line managers to employees) have, however, been found to be important for understanding empowering processes in leadership studies (Byun et al., 2020). Thus, aligning power-sharing processes across organizational levels may also be an important aspect to consider.

Proposition 2: Power sharing that involves all participating employees, and that is aligned across organizational levels, stands a greater chance of having a high positive impact on employee participation and consequently also on intervention outcomes.

Amount of power sharing
"Too little or too much of a good thing" refers to the possibility that a mechanism that ordinarily produces beneficial outcomes (in this case, power sharing) can become harmful at either extreme (Pierce and Aguinis, 2013). That too little power sharing can be detrimental is perhaps no surprise; however, research has also suggested that too much, in the long run, can produce undesirable employee outcomes (Pierce and Aguinis, 2013). Too little power sharing, in the form of autocratic decision-making, has been concluded to be detrimental to employees because it goes against their basic need for autonomy and limits their possibilities for growth and development (Theorell, 2003). Employees who lack influence over decision-making can thus feel alienated and withdraw from participating in activities (Tangirala and Ramanujam, 2012). Similarly, it has been argued that if a manager possesses all the power, they risk being overwhelmed by the decisions they need to make, and employees become frustrated by being hindered or slowed down when managers cannot make timely decisions (Theorell, 2003). Others have suggested that, under certain conditions, participation may pose greater risks than gains for decision quality. For example, when there is time pressure and/or a high risk of destructive conflicts, managers may have no alternative but to make decisions on their own; granted that they have sufficient competence to make those decisions, this may therefore be a viable option (Vroom, 2003). In contrast, consultation is often viewed as beneficial for employee empowerment (Tangirala and Ramanujam, 2012). However, employees' contributions are only put into practice if the manager finds them appropriate.
Therefore, consultation risks being viewed as power sharing at a pseudo-level, with managers seeking acceptance and justification for their own decisions rather than employee participation in decisions (Biron and Bamberger, 2011). In the long run, employees' unfulfilled expectations of having their suggestions and ideas accepted may instead discourage them from such participatory practices (Hollander and Offermann, 1990). Shared and delegated powers over decisions are clearly participatory approaches to decision-making that have the potential to contribute to effective and efficient decisions and are associated with employee empowerment (Cheong et al., 2019). However, these approaches are also associated with risks (Norris et al., 2021). It appears that with prolonged exposure, the positive effects of such empowerment disappear over time. This inverted U-shaped relationship between power-sharing actions and employee empowerment has been observed in several studies (e.g., Lee et al., 2017; Richardson et al., 2021). There are also studies showing a direct relation between high degrees of power sharing and unfavorable employee outcomes. For example, Norris et al. (2021) showed that ambitions to empower employees through delegation could instead be perceived as a manager's way of withdrawing from their managerial responsibilities by passing over unwanted tasks, increasing employee resistance. Additionally, having more power also means having more responsibilities and obligations to participate in extra activities outside ordinary tasks. Employees' performances in these activities may also, to a greater extent, become exposed and targeted for critique (Vroom, 2003). As managerial and employee roles become blurred, the risk of role ambiguity increases (Richardson et al., 2021). Thus, employees can become accountable for areas in which they have less expertise and experience, and may thereby exceed their capabilities. Such extra tasks can also be perceived by employees as illegitimate (i.e., unnecessary or unreasonable) given their designated roles (Björk et al., 2013). Besides positive outcomes, these participatory approaches to power sharing can therefore have undesirable effects. Rather than empowering, gained power may develop into a burden when employees feel that they cannot meet the challenge or when they see it as unreasonable for them to manage (Cheong et al., 2019; Rosen et al., 2020). That is, having too much say over matters in which one does not have sufficient competence or support may not always be empowering or developing; it can instead result in stress and depletion (Björk et al., 2013; Rosen et al., 2020). Although this phenomenon is less well documented in the organizational intervention literature, there are examples of how employee readiness for participation in interventions influences their perception of managers' behavioral strategy (e.g., Lundmark et al., 2020).

Proposition 3: The level and duration of power-sharing practices should be balanced against employee prerequisites so that participation becomes empowering rather than a burden.

Scope of power sharing

In addition to considering the form and balance of participation, it is also important to consider the actual scope of the decisions employees are participating in (Biron and Bamberger, 2011). Richardson et al.
(2021) distinguished between two forms of decisions that employees are involved in: job-focused (i.e., core tasks and how and when they are performed) and job-spanning (i.e., strategic, administrative, or operational challenges that require taking over managerial tasks). When the latter occurs, employees' roles are not just enlarged but enriched, and a ground for growth and deep empowerment (as opposed to surface empowerment) is more likely to be established (Biron and Bamberger, 2011; Richardson et al., 2021). Findings have indicated that job-spanning consultation is more empowering than job-focused delegation, and that job-focused consultation is negatively related to employee perceptions of empowerment (Richardson et al., 2021). In the organizational intervention literature, employee participation has been discussed in terms of decisions on the content and/or process of an intervention (Abildgaard et al., 2020). When employees are given influence over the content and process of an intervention, they, by definition, have a say in what areas of work should be targeted, what the goal of the intervention should be, and what activities should be included. Such decisions could be considered job-spanning. In contrast, interventions with predefined goals, content, and activities have fewer job-spanning decisions that need to be made and therefore leave room mainly for engaging in job-focused decisions. Participation in decisions concerning the content also means that when power sharing takes place matters. Power sharing at the planning stage has been shown to promote participation throughout the implementation and sustainment of the intervention. It also positively influences intervention outcomes, especially when combined with supportive actions from managers (e.g., following up on delegated tasks; Tafvelin et al., 2019). Thus, from an empowerment perspective (Cheong et al., 2019), the cost in time of involving employees at an early stage in job-spanning decision-making may be repaid later in the process, as it may enhance motivation and satisfaction and reduce the time spent on disseminating the intervention. These results are also consistent with suggestions on the importance of involving employees at an early stage to create a fit between the intervention content, the context where the intervention takes place, and the people involved (Lundmark et al., 2018).

Proposition 4: Power sharing to promote participation during organizational interventions is important from the start (i.e., the planning of the intervention) and should preferably be combined with managers' support throughout the process. Power sharing should preferably involve job-spanning, rather than job-focused, decisions to create a better intervention fit.

Boundary conditions

Power sharing culture

Organizational intervention frameworks stress that it is vital to consider where an intervention takes place (e.g., in terms of its cultural context); at the same time, they consistently advocate highly participatory approaches (e.g., Nielsen and Abildgaard, 2013). However, the Western cultural context where these frameworks were developed and evidence was gathered for their support is rather heterogeneous (Tvedt and Saksvik, 2012). For example, Irastorza et al. (2016) found that whether employees in Europe had a say in the design of interventions focusing on the psychosocial work environment differed by country, with employees in Nordic countries having the most influence on such matters.
In Nordic countries, self-governed work groups with egalitarian cultures are common and may thus provide favorable platforms for sharing power and for conducting organizational interventions using a participatory approach (Theorell, 2003). Conversely, previous attempts to apply frameworks developed in cultures other than the ones addressed have posed problems due to a poorer fit with views of power sharing (Tvedt and Saksvik, 2012). Hofstede (1980) introduced the concept of power distance as an important distinguishing determinant of management in different countries. Hofstede (1980) argued that distance in power, "the extent to which a society accepts the fact that power in institutions are distributed unequally" (p. 6), could explain whether autocratic decision-making or a participatory approach is present. Thus, the concept postulates that in organizations in countries or cultures with high power distance, more autocratic management styles are generally preferred. In contrast, in countries and cultures with low power distance, more participatory management styles are generally favored (Javidan et al., 2016). Similarly, Triandis (1994) concluded that the orientation of cultural values in a country would determine its power-sharing profile. Individualist countries tend to put more emphasis on freedom and challenges, whereas collective cultures favor security, obedience, group harmony, and duty (Triandis, 1994). For example, Newman and Nollen (1996) studied how power-sharing practices improved the profitability of work units in different countries. They found that high degrees of power sharing were effective in countries with relatively low power distance but did not affect profitability in cultures with high power distance. In other words, different power-sharing practices are contingent upon what is culturally acceptable, which also suggests that they are more or less effective depending on where an intervention takes place. However, from the perspective of empowerment and democracy at work, national culture is argued to be of less importance, and a non-participative approach is viewed as problematic regardless of where it appears. Autocracy simply stands in the way of developing employee autonomy and control (Rothschild, 2000). In turn, evidence has suggested that a lack of autonomy and control is linked to poor employee well-being and performance (Theorell, 2003). Thus, given sufficient time to introduce a participatory approach, advocating heightened employee latitude in decision-making can be an intervention in itself, even in cultures with high power distance (Budd et al., 2018). Democratization of the workplace, through genuine employee participation in decisions, can thus be seen as a profound goal to strive for under any circumstances as an ethical imperative that is (at least in the long term) also sound for employee well-being and performance (Sashkin, 1984; Foley and Polanyi, 2006). Additionally, instead of viewing power sharing during organizational interventions as an effect of democratic culture, promoting democracy through participation can have a cascading effect that inspires change in a wider organizational and societal democratic process (Budd et al., 2018).

Proposition 5: When considering power-sharing strategies for designing and implementing organizational interventions, power distance culture should be taken into account.
Capacity for power sharing

In the decision-making literature, time is often considered a vital factor in determining the form of power-sharing practice (e.g., Richardson et al., 2021). Engaging employees to participate in the decision-making process naturally takes time, increasingly so with the amount of power being shared. For example, a shared decision process with ambitions to achieve consensus is likely time-consuming, especially if there are conflicting opinions. Similarly, some decisions may have short deadlines or may be connected to a crisis, leaving little room for employee participation in decision-making (Vroom, 2003). Highly participatory forms of power sharing that are not accompanied by sufficient participative time may thus be counterproductive. Managers under time pressure may have to switch to less participatory forms or end up with low-quality decisions because the process is rushed. Rather than contributing to the empowerment and development of employees, the lack of fit between time and the process may instead be experienced as a stressor and may contribute to adverse outcomes (Björk et al., 2013). The success of organizational interventions in which participation is deemed a goal in itself, and thus highly participatory forms of power sharing are necessities, is therefore likely dependent upon having ample time to process decisions. The amount of time needed for different forms of power sharing may, in turn, be contingent upon other factors, for example, managers' and employees' readiness for participating in shared and distributed decision processes (Yang, 2015). At lower stages of readiness, autocratic decision-making can initially outperform more participatory forms in terms of time and quality, because individual employees and teams are uncertain about what is expected of them (Lorinkova et al., 2013). Competence (i.e., in terms of knowledge and experience) among both managers and employees is a vital ingredient for readiness and has been found to be significant for high-quality decision outcomes in power-sharing processes (Vroom, 2003). Competence here refers to both procedural and content competence and can be seen as a central component of both managers' and employees' readiness for intervention participation. That is, competence in exercising power to different degrees (e.g., knowledge of the delegation process) and competence concerning the content of decisions (e.g., organizational intervention designs and activities). A high level of trust in a manager's competence contributes to power sharing being perceived as propitious by employees (Norris et al., 2021). Conversely, employee perceptions of low competence in their managers tend to result in adverse evaluations of their power-sharing practices (Norris et al., 2021). Similarly, managers who perceive that employees lack competence and trustworthiness will be reluctant to share power with them because the quality of decisions may be reduced. In such cases, autocratic decision-making or consultation may be more tempting alternatives, especially if time is of the essence (Vroom, 2003). On the other hand, involvement in participatory interventions can be a lesson in itself, and as individuals and teams develop, so does their readiness for effective participation in decision-making.
Thus, over time, with increased experience, clarification of roles, and commitment to a shared mission, more time-consuming forms of power sharing can be performed more rapidly and produce higher-quality decisions. However, this demands investments in time (Coffeng et al., 2021). Similarly, team climate (i.e., the norms, attitudes, and expectations perceived by team members; Schneider, 1990) is often mentioned in conjunction with decision-making processes as a strongly contributing factor for decision effectiveness and quality (Coffeng et al., 2021). For example, teams that actively participate in decision-making develop trustful relationships and commitments to the team goals (e.g., Costa et al., 2001). Team climate has been researched both as a mediating outcome of managers' power sharing that, in turn, influences the effectiveness and quality of team decisions (Coffeng et al., 2021) and as a moderator that influences the power-sharing-employee behavioral process (Cheong et al., 2019). Conversely, Vroom (2003) suggested that using a participatory approach when competence is low and time is short may instead influence team climate negatively and consequently reduce decision effectiveness and quality. From this, a vicious circle can develop, in which disagreements and destructive conflicts appear, further reducing a team's decision-making effectiveness and quality and hindering future ambitions for participatory decision processes (Vroom, 2003).

Proposition 6: For a participatory process to be realized, sufficient capacity for a high degree of power-sharing practices must be in place, for example, in terms of allocated time, managers' and employees' competence, and team climate. Over time, participatory power-sharing practices can increase capacities.

Discussion

The purpose of this paper was to introduce a power-sharing perspective on employees' participatory influence over organizational interventions. Although preferred ways of power sharing are often implicitly suggested in the intervention literature (e.g., by focusing on co-creation), guidance for understanding what the different forms of power sharing are and what needs to be in place for them to be effective is sparse. In this paper, six propositions are made to sum up the conclusions that can be drawn from the literature. These propositions are intended to help guide researchers and practitioners interested in how power-sharing strategies influence participation, and in how different approaches and boundary conditions may influence the power-sharing-participation-intervention outcome process (see Figure 1). Culture and Capacity are here depicted as potential antecedents to the choice of strategy, but also as boundary conditions in the relation between power-sharing strategy and employee participation. At the same time, as a high degree of employee participation may influence both culture and capacities, they could also be viewed as outcomes of a high degree of employee participation. For example, fostering employee participation may over time improve decision-making in teams, and thereby enhance team climate and decision quality, as well as reduce the time needed for decisions. The three approaches (reach, amount, and scope) function as moderators in the power-sharing strategy-employee participation relation, as they can increase or reduce the influence of the different strategies on employee participation. Two perspectives are clearly present in an examination of the power-sharing literature.
One perspective suggests that levels of power sharing should be based on an analysis of contextual conditions, such as the surrounding culture, the time given, and the competence of employees (Vroom and Jago, 2007). Here, decision effectiveness and quality are often seen as the primary outcomes, and employee participation in decisions to various degrees as a means for reaching these outcomes (Vroom, 2003). The other perspective suggests that heightened employee latitude in decision-making enhances employee outcomes (i.e., in terms of well-being and performance) and therefore should always be advocated (Theorell, 2003). In recent research, attempts have been made to combine these perspectives (e.g., Biron and Bamberger, 2011; Richardson et al., 2021). Although the empowerment of employees may have favorable outcomes, it is clear that empowerment processes are also dependent upon conditions, which sometimes contribute to making those processes burdens rather than possibilities for development (Cheong et al., 2019). However, this does not necessarily mean that relying on less empowering forms of power sharing is required. It could instead suggest that clearly stating employee participation in decision-making as part of the goal of an intervention is important, because that will help determine what prerequisites need to be in place before initiating organizational interventions, for example, by allocating a sufficient amount of time, given the competence and climate of a team, for a participatory process. It could also suggest that organizations should, to a greater degree, consider what kinds of decisions are shared (i.e., job-spanning or job-focused) and what support is given to employees exercising allocated power. As mentioned in the introduction, the aspects considered in this paper are not meant to be exclusive or exhaustive but rather a starting point for further exploration. However, one factor closely related to power sharing that may also be worth considering is leadership. Most leadership theories, implicitly or explicitly, include features of power-sharing strategies (Cheong et al., 2019). For example, specific sets of leadership behaviors focused on the development of employees through challenges (i.e., intellectual stimulation) are a central aspect of transformational leadership that is closely linked to power sharing (Bass and Riggio, 2006). In empowering leadership theory (Cheong et al., 2019), a high degree of power sharing (i.e., through delegation) is also considered a central component for achieving high levels of engagement among employees. In contrast, autocratic leadership styles, such as abusive supervision (Tepper, 2000), are associated with low degrees of power sharing. From this, advocating a constructive leadership style, in general, and specifically in the context of organizational interventions, has been shown to be beneficial for intervention outcomes (Lundmark et al., 2020).

Implications and future directions for research

Introducing power sharing as a complementary perspective on employee participation can broaden the understanding of why and when organizational interventions are successful or not. Process evaluations, including assessments of participation, are widely used for answering such questions (Nielsen and Noblet, 2018). They are also used to be able to make adaptations to a process as it evolves (von Thiele Schwarz et al., 2016).
Including assessments of how, what, when, and to what degree managers share power in the intervention process may thus further facilitate the understanding of the mechanisms contributing to success or failure. For example, if a participatory approach is used for designing and implementing an organizational intervention, managers' initiation of co-creation and/or power distribution can be evaluated. This can help explain why participation is present or not and can facilitate problem-solving during implementation if participation is present to a lesser degree than intended. Such assessment tools could likely be adopted from the literature on empowering leadership and decision-making (for an overview, see Cheong et al., 2019) and adapted to an intervention context. Furthermore, researchers could also benefit from considering power sharing in research-driven intervention designs. For example, if the content is more or less predecided, can empowering forms of power sharing in decision-making still be introduced? Can some decisions be performed at a consultation level and others be delegated, and how are such changes in strategies understood? How can the role ambiguity that may come with a role expansion be mitigated? Answering such questions and examining the effects of the power-sharing strategies applied could further help advance the understanding of what is appropriate, for whom, when, and to what extent, in line with calls for a better understanding of the process (Nielsen et al., 2010).

Practical implications

From a managerial standpoint, it is worth knowing that although inviting employees into the participatory decision process may consume time and effort, it can also contribute to improving an intervention's design and hence its outcomes. It can also have cascading effects, for example, in terms of increasing employee autonomy and control; building positive relationships that improve team climate; aligning individual, team, and organizational goals; and enhancing commitment to the organization. Thus, elevating employee latitude in decision-making on designing and implementing organizational interventions contributes both directly and indirectly to achieving the objectives of the intervention. However, managers must balance participatory ambitions with contextual considerations (e.g., the time and competence at hand). If these ambitions do not align with such prerequisites, there is instead a substantial risk of detrimental outcomes (Norris et al., 2021). In sum, managers should carefully consider the objectives of an intervention and what level of employee participation will contribute to reaching these objectives. They may also want to consider additional gains for the organization from establishing different power-sharing practices (e.g., promoting democracy at work). The objectives must then align with the power-sharing strategies (Vroom and Jago, 1988). If not, employees may experience being misled (e.g., having a say in issues that do not matter or having their decisions neglected), with the potential failure of the intervention and hampered motivation to participate in future initiatives as a result. Finally, embarking on a participatory power-sharing process without aligning such a strategy with sufficient contextual prerequisites is a road to failure (Richardson et al., 2021). Hence, assessing the preconditions and influential contingent factors to make sure that they are acknowledged in planning (e.g., in terms of time, activities, and support) is vital.
Considering the complexity of the above, if participatory organizational intervention approaches are to be encouraged, organizations need to train managers in the essential power-sharing skills for achieving meaningful participation. Such educational activities must also contribute to managers' abilities to determine the necessary preconditions to be fulfilled, the conditions and support needed during the intervention's implementation, and the knowledge needed about the potential pitfalls of the different power-sharing paths. The training of managers should also involve how to shift strategies consciously to avoid too much of a good thing, for example, as Richardson et al. (2021) suggested, to shift between job-focused delegation and job-spanning consultation and delegation to correspond with employees' needs for both empowerment and control. At the same time, doing this in a way that is not perceived as inconsistent may be a challenge, and the introduction of such strategies could benefit from simultaneous or joint employee training. For example, training in power-sharing practices could very well be more functional if managers and their teams learn together, which perhaps can also facilitate the transfer of such skills to practice.

Conclusion

In this study, the concept of power sharing was explored in relation to the designs, implementations, and outcomes of organizational interventions. Although power-sharing practices are determinants of employee participation, often considered a central aspect of organizational interventions, they have seldom been the focus of attention in the intervention literature. By taking a power-sharing perspective, the implications that this may have for organizational interventions were conceptually examined. Thereby, this study hopefully contributes to building a platform for future examinations of the power-sharing concept in organizational intervention contexts. From a practitioner viewpoint, the importance of aligning power-sharing forms with participants' decision needs is stressed. To achieve such a fit, factors that can facilitate or hinder the power-sharing process must also be considered. Furthermore, managers must be given appropriate training in how to determine and implement different power-sharing strategies, and supplied with adequate support for realizing participatory decision-making practices.

Author contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Funding

This work was supported by FORTE - Swedish Research Council for Health, Working Life and Welfare under Grant 2019-00066.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
2023-07-15T15:04:50.952Z
2023-07-13T00:00:00.000
{ "year": 2023, "sha1": "94e44fbabad276d583fbaab9c268b52654eb1993", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1185735/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bf78c0225870533a282b1b524583e95dbf1ae079", "s2fieldsofstudy": [ "Business", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
256605152
pes2o/s2orc
v3-fos-license
The cyclohexene derivative MC-3129 exhibits antileukemic activity via RhoA/ROCK1/PTEN/PI3K/Akt pathway-mediated mitochondrial translocation of cofilin

The effects of MC-3129, a synthetic cyclohexene derivative, on cell viability and apoptosis have been investigated in human leukemia cells. Exposure of leukemia cells to MC-3129 led to the inhibition of cell viability and induction of apoptosis through the dephosphorylation and mitochondrial translocation of cofilin. A mechanistic study revealed that interruption of the RhoA/ROCK1/PTEN/PI3K/Akt signaling pathway plays a crucial role in the MC-3129-mediated dephosphorylation and mitochondrial translocation of cofilin and induction of apoptosis. Our in vivo study also showed that the MC-3129-mediated inhibition of the tumor growth in a mouse leukemia xenograft model is associated with the interruption of ROCK1/PTEN/PI3K/Akt signaling and apoptosis. Molecular docking suggested that MC-3129 might activate the RhoA/ROCK1 pathway by targeting LPAR2. Collectively, these findings suggest a hierarchical model, in which the induction of apoptosis by MC-3129 primarily results from the activation of RhoA/ROCK1/PTEN and inactivation of PI3K/Akt, leading to the dephosphorylation and mitochondrial translocation of cofilin, and culminating in cytochrome c release, caspase activation, and apoptosis. Our study reveals a novel role for RhoA/ROCK1/PTEN/PI3K/Akt signaling in the regulation of mitochondrial translocation of cofilin and apoptosis and suggests MC-3129 as a potential drug for the treatment of human leukemia.

Introduction

Chemotherapy is one of the most important treatments for cancer. A principal obstacle to the clinical efficacy of chemotherapy is the potential toxicity to normal tissues of the body and the development of drug resistance 1,2 . To overcome these obstacles, the design and discovery of efficient and safe chemical agents for the treatment of cancer is the primary objective of contemporary medicinal chemistry. Recently, asymmetric organocatalysis has been successfully applied to synthesize new chemical derivatives based on the bioactivity and mechanism of anticancer activity 3 . In recent years, cyclohexene derivatives containing chiral primary amines have received considerable attention because of their diverse chemotherapeutic potential, including versatile anticancer activities. Previous studies have reported that 2-cyclopentenones and 2-cyclohexenone in the presence of chiral primary amines exhibited promising activity against some cancer cell lines, thus indicating that such skeletons might serve as leads in drug discovery 4 . Synthetic tricyclic pyranopyrones with simple aromatic substituents possess anticancer properties 5 . Compound ZJ-101, containing a cyclohexenyl ring, also exhibited anticancer activity 6 . ST7612AA1, a new generation of HDAC inhibitors, exhibited potential activity against a broad panel of cancer cell lines and in vivo tumor models 7 . These reports indicated that synthetic cyclohexene derivatives might show potential as chemotherapeutic agents for the treatment of human cancer. A specific pharmacological mechanism is important for further drug development. Increasing evidence has revealed that the high incidence of Rho-associated coiled-coil-containing protein kinase 1 (ROCK1) overexpression in human tumors suggests that this kinase is important in the carcinogenic process and therefore is a potential target for therapeutic intervention 8 .
ROCK1 belongs to a family of serine/threonine kinases activated by Rho GTPases or caspase-3 via cleavage of the C-terminal autoinhibitory domain from the kinase active site 9 . ROCK1 is of significant interest in drug discovery, owing to its fundamental role in vital signal transduction pathways central to many essential cellular activities, including cell death and survival 10 . Several ROCK1 inhibitors are involved in the regulation of cell death and survival through distinct mechanisms (i.e., Mcl-1 phosphorylation and the PTEN/PI3K/Akt signaling pathway) 11,12 . In addition to biological studies, computer modeling is an important tool for target identification, since the 3D structures of certain targeted receptor(s) can be constructed by homology modeling, and the binding poses and binding affinities of the ligand and receptor can be predicted by molecular docking 13 . Homology modeling refers to constructing an atomic-resolution model of the "target" protein from its amino acid sequence and an experimental three-dimensional structure of a related homologous protein. Docking involves fitting virtual ligands, typically derived from large virtual libraries, into targeted binding sites employing computer algorithms. A well-studied virtual ligand-protein interaction complex model is an essential prerequisite for the design and subsequent optimization of novel bioactive compounds, including new anticancer agents 14 . In a recent study, we reported nine new cyclohexene derivatives, synthesized through exo-Diels-Alder and redox reactions, which exhibited different degrees of anticancer activity 4 . Here, we found that a new cyclohexene derivative, named MC-3129, also exhibited potent cytotoxic effects against several human cancer cell lines. The molecular mechanism of MC-3129-mediated apoptosis was also investigated.

MC-3129 reduces cell viability in leukemia cells and other cancer cell lines

The chemical library was assembled by applying a series of synthetic strategies based on asymmetric organocatalysis over a number of years. Nearly 3000 small molecules were screened for their anti-proliferative activity on cancer cells. Among these molecules, MC-3129, a cyclohexene derivative synthesized through exo-Diels-Alder and redox reactions, exhibited the most potent cytotoxic effects against human leukemia U937 cells (Table 1 and Figure S1). In addition to the U937 cell line, three other leukemia cell lines (Jurkat, HL-60, and K562) and five solid tumor-derived cell lines, including A549 (non-small cell lung cancer), SMMC-7721 (hepatocellular carcinoma), Eca109 (esophageal carcinoma), DU145 (prostate carcinoma), and MDA-MB-231 (breast adenocarcinoma), were tested for the cytotoxic effects of MC-3129.

Figure 1 legend (fragment): The cell apoptosis was determined by flow cytometry using Annexin V/PI staining, the percentage of apoptotic cells was analyzed for three separate experiments (mean ± SD, ns P > 0.05, **P < 0.01 compared with control). c, d Whole cell lysates, cytosolic (Cytosol) and mitochondrial (Mito) fractions from U937 cells were prepared and subjected to immunoblotting using antibodies against PARP, cleaved-caspase-3 (C-Caspase-3), cleaved-caspase-9 (C-Caspase 9), cytochrome c (Cyto c), GADPH, and Cox IV. Jurkat, HL-60, and K562 cells were treated with or without 10 μM of MC-3129 for 24 h. e The cell apoptosis was determined by flow cytometry, values represent the means ± SD, for three separate experiments (**P < 0.01). f The total protein lysates were analyzed by immunoblotting using the indicated antibodies. CF, cleavage fragment.
As shown in Table 1 and Figure S1, MC-3129 exhibited inhibitory effects on cell viability in a dose-dependent manner in these cancer cell lines.

MC-3129 induces apoptosis in human leukemia cells

We next examined the effects of MC-3129 on apoptosis in human leukemia cell lines. Flow cytometry analysis revealed that the exposure of U937 cells to MC-3129 resulted in a significant increase in apoptosis in dose- and time-dependent manners (Fig. 1a, b). Consistent with these findings, the same MC-3129 concentrations and exposure intervals caused the cleavage/activation of caspase-9 and caspase-3 and degradation of PARP. These events were also accompanied by significant increases in the release of cytochrome c from mitochondria into the cytosol (Fig. 1c, d). To determine whether these events were restricted to myeloid leukemia cells, parallel studies were performed in the Jurkat, HL-60, and K562 leukemia cell lines. These cells exhibited apoptotic effects of MC-3129 similar to those observed in U937 cells (Fig. 1e). Additionally, Jurkat, HL-60, and K562 cells exhibited comparable degrees of caspase-9 and caspase-3 activation, PARP degradation, as well as cytochrome c release (Fig. 1f). Recent evidence has indicated that the dephosphorylation/mitochondrial translocation of cofilin is crucial for the initiation of mitochondrial injury-mediated apoptosis 16,17 . We next investigated whether MC-3129 could affect the dephosphorylation/mitochondrial translocation of cofilin during the initiation of apoptosis. Treating cells with MC-3129 decreased the levels of phospho-cofilin (Ser3) in whole cell lysates, increased the levels of cofilin in mitochondria, and decreased the levels of cofilin in the cytosol in dose- and time-dependent manners (Fig. 2b). Similar results were also obtained in other cancer cell lines (Figure S2A). To further determine whether the phosphorylation status of cofilin could influence its ability to translocate to mitochondria and induce apoptosis mediated by MC-3129, two cofilin mutants were generated that mimic either the dephosphorylated or phosphorylated forms by changing Ser3 to alanine (active; S3A) or glutamic acid (inactive; S3E), as described previously 17 . Overexpression of cofilin S3A enhanced, whereas cofilin S3E abolished, the mitochondrial localization of cofilin (Fig. 2c). Furthermore, cofilin S3A enhanced, whereas cofilin S3E reduced, cytochrome c release, caspase-3 activation, and apoptosis mediated by MC-3129 (Fig. 2d, e). Thus, these findings suggest that MC-3129-mediated dephosphorylation of cofilin (Ser3) is required for the mitochondrial translocation of cofilin and induction of apoptosis.

Exposure to MC-3129 activates RhoA/ROCK1/PTEN and inactivates PI3K/Akt

The effects of MC-3129 on U937 cells were examined in relation to changes in various signal transduction pathways implicated in the regulation of apoptosis. The exposure of U937 cells to MC-3129 decreased ROCK1 levels and increased ROCK1 cleavage in dose- and time-dependent manners (Fig. 3a). Similar results were also obtained in other cancer cell lines (Figure S2B). Treating cells with MC-3129 also resulted in discernible increases in the levels of phospho-PTEN and decreases in the levels of phospho-PI3K and phospho-Akt in dose- and time-dependent manners (Fig. 3a). Similar results were also obtained in other cancer cell lines (Figure S2B). Furthermore, MC-3129 exposure induced dose- and time-dependent increases in the GTP-activated form of RhoA (Fig. 3b).
Inhibition or knockdown of ROCK1 abrogates the MC-3129-mediated dephosphorylation and mitochondrial translocation of cofilin and induction of apoptosis

To further assess the functional significance of ROCK1 activation in regulating the dephosphorylation and mitochondrial translocation of cofilin and induction of apoptosis, the ROCK1 inhibitor Y27632 was employed. Pretreating cells with Y27632 decreased MC-3129-mediated ROCK1 cleavage/activation, PTEN activation, and Akt inactivation (Fig. 4a). Pretreatment with Y27632 also decreased the MC-3129-mediated dephosphorylation and mitochondrial translocation of cofilin (Fig. 4b). Furthermore, pretreatment with Y27632 attenuated MC-3129-mediated apoptosis (Fig. 4c), accompanied by decreases in the degradation of PARP, cleavage/activation of caspase-9 and caspase-3, as well as the release of cytochrome c into the cytosol (Fig. 4d). To further confirm the functional role of ROCK1 in MC-3129-mediated dephosphorylation and mitochondrial translocation of cofilin and apoptosis, a lentiviral shRNA approach was used to stably knock down ROCK1 expression. The infection of U937 cells with ROCK1 shRNA reduced the expression of ROCK1 and blocked MC-3129-mediated ROCK1 cleavage/activation, PTEN activation, and Akt inactivation (Fig. 5a). The knockdown of ROCK1 also attenuated the MC-3129-mediated dephosphorylation and mitochondrial translocation of cofilin (Fig. 5b). In addition, the knockdown of ROCK1 attenuated MC-3129-mediated apoptosis (Fig. 5c), degradation of PARP, cleavage/activation of caspase-9 and caspase-3, as well as the release of cytochrome c into the cytosol (Fig. 5d). Taken together, these findings indicate that the activation of ROCK1 played an important functional role in the MC-3129-mediated dephosphorylation and mitochondrial translocation of cofilin and apoptosis.

Figure 2 legend (fragment): a Whole cell lysates were prepared and subjected to western blot analysis using antibodies against Mcl-1, phospho-Bad (p-Bad), Bad, Bcl-2, Bcl-xL, and GADPH. The cytosolic (Cytosol) and mitochondrial (Mito) fractions were also prepared and subjected to western blot analysis using antibodies against Bax, GADPH, and Cox IV. b Whole cell lysates, cytosolic (Cytosol), and mitochondrial (Mito) fractions were analyzed by western blot assay using antibodies against phospho-cofilin (p-cofilin), cofilin, GADPH, and Cox IV. U937 cells were transfected with control empty vectors, pseudophosphorylated (inactive, S3E) mutant plasmids, or human cofilin dephosphorylated (active, S3A) mutant plasmids for 48 h, and then were treated with 10 μM MC-3129 for 24 h. c, d Whole cell lysates and mitochondrial fractions were determined by immunoblotting. e The cell apoptosis was determined by flow cytometry using Annexin V/PI staining; the percentage of apoptotic cells was analyzed for three separate experiments (mean ± SD, **P < 0.01).

MC-3129 inhibits tumor growth in a U937 xenograft mouse model

To determine whether our in vitro findings could be replicated in vivo, a U937 cell xenograft tumor growth model was employed. Nude mice were subcutaneously inoculated with U937 cells, followed by injections with vehicle or MC-3129 (10 and 50 mg/kg, i.p.) for 30 days starting 3 days after tumor inoculation. As shown in Fig. 6a, b, treatment of nude mice with MC-3129 resulted in a significant suppression of tumor growth after 15 days of drug exposure.
The average tumor volumes for the 10 and 50 mg/kg MC-3129 treatment groups were 356.2 ± 88.9 mm³ and 205.5 ± 64.2 mm³, respectively, compared to that for the vehicle control group (433.5 ± 110.6 mm³) (P < 0.05 or P < 0.01). These effects became more apparent after 20 days of drug exposure. At 30 days after drug exposure, the average tumor volumes were strongly reduced, by ~18% and ~50% at the two concentration levels (10 and 50 mg/kg, respectively), compared with that of the vehicle control group (P < 0.01). The results of Kaplan-Meier survival analysis showed that the survival rate of the 50 mg/kg MC-3129 group was higher than that of the vehicle control group during the 30 days of treatment (P < 0.05) (Fig. 6c). In contrast, the treatment of nude mice with MC-3129 did not cause significant changes in body weight (Fig. 6d) or other signs of potential toxicity, such as agitation, impaired movement and posture, indigestion, or diarrhea. These results suggest that MC-3129 exhibited potent inhibitory effects on tumor growth without any acute toxicity. To clarify whether the inhibition of tumor growth was solely due to the inhibition of cell viability, we investigated the effects of MC-3129 on apoptosis in tumor tissues using H&E staining, TUNEL, and immunohistochemical assays. The sections of U937 xenografts from mice treated with MC-3129 exhibited a reduced number of cancer cells, with signs of necrosis with inflammatory cell infiltration and fibrosis (Fig. 6e, top panels). Exposure to MC-3129 resulted in a striking induction of apoptosis in tumor cells, with numerous dark brown-colored apoptotic cells (Fig. 6e, middle panels). In addition, treatment with MC-3129 caused a marked increase in immunoreactivity for cleaved caspase-3, which was indicative of apoptosis (Fig. 6e, bottom panels). To further evaluate whether the interruption of the ROCK1/PTEN/Akt signaling pathway is involved in MC-3129-induced apoptosis in vivo, we performed western blot analyses. As shown in Fig. 6f, the treatment of nude mice with MC-3129 resulted in decreased ROCK1 levels and increased ROCK1 cleavage. Treatment with MC-3129 also increased the levels of phospho-PTEN and decreased the levels of phospho-PI3K, phospho-Akt, and phospho-cofilin in whole cellular lysates of tumor tissues. Such findings suggest that the interruption of the ROCK1/PTEN/PI3K/Akt signaling pathway could contribute to MC-3129-mediated apoptosis and antileukemic effects in vivo.

Target prediction of MC-3129 by homology modeling and molecular docking

Ten GPCRs, including lysophosphatidic acid receptors 1 and 2 (LPAR1-2), sphingosine 1-phosphate receptors 1, 2, and 3 (S1PR1-3), G-protein-coupled receptor 132 (GPR132), G-protein-coupled receptor 116 (GPR116), type-1 angiotensin II receptor (AGTR1), alpha-1A adrenergic receptor (ADRA1A), and B2 bradykinin receptor (BKRB2), were selected as the potential target proteins. The structures of S1PR1, AGTR1, and LPAR1 were obtained from the RCSB Protein Data Bank with ID numbers 3VZY, 4YAY, and 4Z35, respectively (Fig. 7a). Homology modeling with Modeler 9.12 was used to construct the remaining GPCRs by using different templates, as shown in Table S1. Surflex-Dock from SYBYL-X 2.0 was used for docking 12 selected compounds, including both active and inactive compounds (Table S2), into these GPCRs. We selected docking scores greater than or equal to 6.0 as the cutoff values, according to the SYBYL software, as previously described. The docking results of these compounds with the ten GPCR models are shown in Table S3.
Only the LPAR2 model could distinguish active and inactive compounds (Fig. 7b). Moreover, MC-3129 was precisely docked into the binding pocket, with an obvious hydrogen bond between LPAR2 and MC-3129 (Fig. 7c). These results suggested that LPAR2 could be a potential target for these compounds. Therefore, we suggest that MC-3129 might activate the RhoA/ROCK1 pathway by targeting LPAR2 (Fig. 7d).

Figure 4 legend (fragment): fractions were prepared and subjected to western blot analysis using antibodies against ROCK1, p-PTEN, PTEN, p-Akt, Akt, p-cofilin, cofilin, GADPH, and Cox IV. c, d Apoptosis was determined by flow cytometry with Annexin V/PI staining and total protein lysates were analyzed by immunoblotting using the indicated antibodies. Error bars represent the means ± SD for three separate experiments. **P < 0.01.

Discussion

In this study, we demonstrated that the cyclohexene derivative MC-3129 specifically reduces the viability of human hematological and solid tumor cell lines and exhibits antileukemic activity in vivo without side effects. Such a broad-spectrum anticancer agent with low toxicity is considered promising for the development of anticancer therapies. We also observed that MC-3129 selectively induces apoptosis in leukemia U937 cells in dose- and time-dependent manners. To better characterize this molecule, we investigated the detailed mechanism of MC-3129-induced cell death in U937 cells. Cofilin is a member of the ADF/cofilin family, which regulates actin dynamics by increasing the rate of actin depolymerization 18 . Cofilin not only serves as an actin-depolymerizing factor, but also plays crucial roles in various cellular activities (i.e., apoptosis) 19 . Recent evidence has indicated that the mitochondrial translocation of cofilin is an early step in apoptosis induction 20 . The translocation of cofilin to mitochondria is necessary for the opening of the mitochondrial permeability transition pore and the subsequent release of cytochrome c. Only dephosphorylated cofilin translocated to mitochondria, resulting in cytochrome c release and apoptosis 21 . Consistent with these reports, the dephosphorylation and mitochondrial translocation of cofilin is necessary for MC-3129-mediated cytochrome c release and apoptosis, based on the following findings. First, after MC-3129-induced apoptosis, cofilin was translocated from the cytosol to mitochondria prior to the release of cytochrome c. Second, MC-3129 treatment decreased the levels of phosphorylated cofilin. Third, dephosphorylated cofilin enhanced, whereas phosphorylated cofilin attenuated, apoptosis mediated by MC-3129. Such findings suggest that the MC-3129-mediated dephosphorylation of cofilin (Ser3) is required for the translocation of cofilin to mitochondria, leading to cytochrome c release and apoptosis induction.

Figure 5 legend (fragment): Total cellular extracts, cytosolic (C) and mitochondrial (M) fractions were prepared and subjected to western blot analysis using anti-ROCK1, p-PTEN, PTEN, p-Akt, Akt, p-cofilin, cofilin, GADPH, and Cox IV. c, d Apoptosis was measured by flow cytometry using Annexin V/PI staining, and total protein lysates were analyzed by immunoblotting using the indicated antibodies. Error bars represent the means ± SD for three separate experiments. **P < 0.01.

Our results provide detailed information on the molecular mechanisms by which MC-3129 induces apoptosis in human leukemia cells (i.e., by activation of RhoA/ROCK1/PTEN and inactivation of PI3K/Akt).
Fig. 6 legend: MC-3129 inhibits tumor growth and induces apoptosis in a U937 xenograft animal model. Thirty-six nude mice (5 weeks old) were inoculated subcutaneously with U937 cells (2 × 10⁶ cells/mouse) and randomly divided into three groups (12/group) for treatment with MC-3129 (10 mg/kg, 50 mg/kg, i.p., five times per week) or with vehicle control solvent as described in the "Methods" section. Tumor growth was measured once every 5 days, and tumor volume (V) was calculated as V = lw²/2. a, b Average tumor volume and gross appearance in the vehicle control group and the MC-3129 treatment groups. Error bars represent the means ± SD. *P < 0.05 or **P < 0.01 compared with control. c The Kaplan-Meier survival curve of the control group and the MC-3129 treatment groups during the 30 days of treatment. ns P > 0.05, **P < 0.01. d Body weight changes of mice during the 30 days of study. Statistical analysis of body weight changes showed no significant differences between the MC-3129 treatment and vehicle control groups. e Tumor tissues were sectioned and subjected to H&E staining, TUNEL assay, and immunohistochemistry for evaluating histological morphology, apoptosis, and expression of C-Caspase-3. f Whole cell lysates were prepared and subjected to western blot analysis using antibodies against ROCK1, p-PTEN, PTEN, p-PI3K, PI3K, p-Akt, Akt, p-Cofilin, Cofilin, and GADPH.

Rho kinase (ROCK) belongs to a family of serine/threonine kinases activated via interactions with Rho GTPases. ROCK is involved in a wide range of fundamental cellular functions, such as contraction, adhesion, migration, proliferation, and apoptosis 22 . The high incidence of overexpression of ROCK1 in human tumors suggests that this kinase is important in the carcinogenic process and therefore may be a potential target for therapeutic intervention. Recent studies have shown that ROCK1 plays an important role in the regulation of apoptosis in various cell types and animal disease models. ROCK1 activity can be regulated by several distinct mechanisms (i.e., RhoA- or caspase-3-dependent cleavage/activation of ROCK1) 23 . We show here that the pan-caspase inhibitor z-VAD-fmk failed to prevent MC-3129-mediated ROCK1 cleavage/activation. However, depletion of RhoA with shRNA could attenuate MC-3129-mediated ROCK1 activation. Such findings suggest that MC-3129-mediated ROCK1 activation is RhoA-dependent. As previously reported, RhoA is a proximal downstream effector of numerous GPCRs 24,25 . Computer modeling revealed that MC-3129 might activate the RhoA/ROCK1 pathway by targeting LPAR2. Several ROCK substrates are involved in the regulation of cell death and survival 26 . Phosphatase and tensin homolog (PTEN) is a newly identified ROCK substrate 27 , and phosphorylation by ROCK stimulates the phosphatase activity of PTEN. PTEN dephosphorylates both proteins and phosphoinositides and negatively regulates the activities of the phosphatidylinositol (PI) 3-kinase/Akt pathway, which plays important roles in a diverse range of biological processes, including cell survival and apoptosis 28,29 . Based on our results, we speculate that the PTEN/PI3K/Akt signaling cascade acts downstream of the RhoA/ROCK1 pathway during the MC-3129-mediated mitochondrial translocation of cofilin and induction of apoptosis. First, treating U937 cells with MC-3129 induced the activation of RhoA/ROCK1/PTEN and inactivation of PI3K/Akt.
Second, pretreatment with the ROCK1 inhibitor Y27632 attenuated the MC-3129-mediated activation of PTEN, inactivation of Akt, dephosphorylation and mitochondrial translocation of cofilin, as well as induction of apoptosis. Third, the knockdown of ROCK1 with shRNA attenuated the MC-3129-mediated activation of PTEN, inactivation of Akt, dephosphorylation and mitochondrial translocation of cofilin, as well as induction of apoptosis. In this study, a solid tumor xenograft-like mouse model was employed to evaluate the inhibitory effects of MC-3129 on tumor growth in vivo 30,31 . Our results revealed that MC-3129 inhibited tumor growth in a U937 cell xenograft mouse model through the induction of apoptosis (i.e., increased apoptosis and immunoreactivity for cleaved caspase-3). Additionally, the treatment of nude mice with MC-3129 increased the cleavage/activation of ROCK1 and the levels of phospho-PTEN, and decreased the levels of phospho-Akt in tumor sections of nude mice, further confirming the antileukemic effect of MC-3129 through interruption of the ROCK1/PTEN/Akt signaling pathway. In conclusion, the present findings demonstrated that MC-3129 exerts its selective anticancer effect by inducing cytotoxicity in different types of cancer cell lines. MC-3129 also exhibits its anticancer property by inducing apoptosis in hematological malignancy. Collectively, these findings suggest a hierarchy of events in MC-3129-induced apoptosis, in which RhoA/ROCK1 activation is the primary insult leading to PTEN activation and PI3K/Akt inactivation, resulting in the dephosphorylation and mitochondrial translocation of cofilin, and culminating in cytochrome c release and apoptosis induction (Fig. 7d). Since MC-3129 exhibits broad-spectrum anticancer effects with low toxicity, MC-3129 may have potential for development as a chemotherapeutic agent for the treatment of human leukemia and other cancers. Future preclinical studies should confirm the usefulness of this molecule as a clinical drug candidate for cancer treatment.

Fig. 7 legend: The 3D structures of ten GPCRs that serve as upstream effectors of RhoA. a The 3D structures of ten GPCRs were generated from the RCSB Protein Data Bank (S1PR1 with ID 3VZY, AGTR1 with ID 4YAY, LPAR1 with ID 4Z35) or constructed by homology modeling (for ADRA1A, BKRB2, GPR116, GPR132, LPAR2, S1PR2, and S1PR3). b Molecular docking of these GPCRs with compounds (MC-3129, MC-3134, and MC-3135) was performed with SYBYL-X 2.0 Surflex-Dock. The results showed that LPAR2 was the most likely target because the docking scores of compounds with LPAR2 were greater than 6.0. c MC-3129 was precisely docked into the binding pocket with an obvious hydrogen bond between LPAR2 and MC-3129. d The mechanism of MC-3129 in the regulation of cofilin mitochondrial translocation was predicted as the LPAR2/RhoA/ROCK1/PTEN/PI3K pathway.

Cell culture and establishment of shRNA stable cell line

U937, Jurkat, HL-60, K562, SMMC-7721, and Eca109 cells were cultured in RPMI-1640 medium, while A549, DU145, and MDA-MB-231 cells were cultured in Dulbecco's modified Eagle's medium (DMEM); both media contained 10% fetal bovine serum (FBS) and antibiotics. All cell lines were obtained from the American Type Culture Collection (Manassas, VA) and cultured at 37°C in a humidified atmosphere with 5% CO₂ in air. The human ROCK1 shRNA (5′-CCGGGCACCAGTTGTACCCGATTTACTCGAGTAAATCGGGTACAACTGGTGCTTTTTG-3′) and RhoA shRNA (5′-CCGGCGATGTTATACTGATGTGTTTCTCGAGAAACACATCAGTATAACATCGTTTTTG-3′) were synthesized and subcloned into the pLKO.1 plasmid.
At 48 h after the co-transfection of lentiviral packaging plasmids into 293T cells, the lentivirus-containing supernatant was collected. U937 cells were transduced with serial dilutions of lentiviral supernatant in the presence of 5 μg/ml of polybrene and selected with 5 mg/ml of puromycin. After antibiotic selection for 3 weeks, stable shRNA cells were obtained.

Cell viability (MTT) assay

Approximately 3000 A549, SMMC-7721, Eca109, DU145, and MDA-MB-231 cells, and 30,000 U937 cells, were seeded onto each well of a 96-well plate. The cells were treated as indicated for 24 h, depending on the experimental conditions. Next, 20 μl of MTT (5 mg/ml) was added per well, and the cells were incubated at 37°C for 4 h. For the adherent cell lines, the medium was discarded, and the formazan was dissolved in 150 μl of DMSO. The rate of color production was measured at 495 nm with an iMark™ Microplate Absorbance Reader (Bio-Rad, Hercules, CA). For suspension cells, 100 μl of 10% SDS, 5% isobutyl alcohol, and 12 mM of HCl were directly added to the medium 32 , and the next day, the plates were read at 595 nm. The cell viabilities were normalized to the control group. The data were analyzed with GraphPad Prism 5.01 software (GraphPad Software Inc., San Diego, CA), and the IC50 values were calculated from the regression equation of the best straight-line fit obtained by linear-regression analysis.

Apoptosis analysis

The extent of apoptosis of the leukemia cells was evaluated by flow cytometry using Annexin V/PI staining.

Preparation of the mitochondrial and cytosolic fractions

Mitochondrial and cytosolic fractions were obtained as previously described 33 . Briefly, cell pellets were washed with cold PBS and resuspended in 5× buffer A (20 mM of HEPES, 10 mM of KCl, 1.5 mM of MgCl₂, 1 mM of EDTA, 1 mM of EGTA, 1 mM of Na₃VO₄, 2 mM of leupeptin, 1 mM of PMSF, 1 mM of DTT, 2 mM of pepstatin A, and 250 mM of sucrose). For homogenization, the cells were passed through a 22-gauge needle 25 times. The homogenate was centrifuged at 4°C in three sequential steps as follows: 1000 g, 10,000 g, and 100,000 g. The 10,000-g pellet was considered the "mitochondrial" fraction, and the 100,000-g supernatant was considered the "cytosolic" fraction. These fractions were subjected to western blot analysis.

Western blotting

Total cellular samples were washed two times with ice-cold PBS and subsequently lysed in 4× NuPAGE LDS sample buffer (Invitrogen, NP0007) supplemented with 50 mM of dithiothreitol. Protein concentrations were determined using an Enhanced BCA Protein Assay Kit (Beyotime Biotechnology, P0011), and 30 μg of sample protein was separated using SDS-PAGE and transferred to PVDF membranes (Bio-Rad, 162-0177). The membranes were blocked with 5% fat-free dry milk in Tris-buffered saline (TBS; 10 mM of Tris-Base, 150 mM of NaCl, pH 7.6) containing 0.1% Tween-20 (Santa Cruz Biotechnology, sc-29113) and subsequently incubated with antibodies. The protein bands were detected by incubating with horseradish peroxidase-conjugated secondary antibodies (Kirkegaard and Perry Laboratories, Gaithersburg, MD) and visualized with Clarity Western ECL Substrate (Bio-Rad, 1705061).

RhoA activity assay

The RhoA activity assays were performed according to the manufacturer's instructions (Cytoskeleton, Denver, CO). Briefly, 5 × 10⁵ cells were plated and cultured for 2 days. The samples were then rapidly lysed at 4°C and incubated with sepharose-bound Rhotekin to pull down active RhoA.
After washing, the bead/protein complexes were boiled in sample buffer and separated by SDS-PAGE. The blots were incubated with an antibody against RhoA.

Site-directed mutagenesis and transfection

Dephosphorylated (active, S3A) and pseudophosphorylated (inactive, S3E) human cofilin plasmids were a gift from Professor James Bamburg (Colorado State University, USA). Plasmids were transfected into U937 cells using Lipofectamine 3000 according to the manufacturer's instructions. After 48 h of transfection, the cells were exposed to 10 μM MC-3129 for 24 h and subsequently subjected to immunoblotting or apoptosis analysis.

Animal studies

Nude mice (5 weeks old) were purchased from Vital River Laboratories (VRL, Beijing, China) and fed a standard animal diet and water. Animal studies were approved by the University Institutional Animal Care and Use Committee. The lower back of each mouse was subcutaneously inoculated with 2 × 10⁶ U937 cells in serum-free RPMI-1640 medium with Matrigel basement membrane matrix (Sigma, E1270). The mice were randomized into three groups (n = 12 per group). Five days after tumor inoculation, the mice were treated with MC-3129 (10 mg/kg or 50 mg/kg, intraperitoneally, for 30 days) or an equal volume of vehicle. The tumor volumes and body weights were monitored every 5 days after treatment. The tumor volumes were determined by measuring tumor length (l) and width (w), and the tumor volume was calculated as V = lw²/2. The mice were killed after 30 days of exposure, and tumor tissues from representative mice were fixed in paraformaldehyde, embedded in paraffin, sectioned and processed for hematoxylin and eosin (H&E) staining. The TUNEL assay was performed according to the manufacturer's instructions using the In Situ Cell Death Detection Kit (Roche, Mannheim, Germany) to detect apoptosis in the tumor tissues. Immunohistochemistry was performed as previously described 34.

Preparation of crystal structures of GPCR proteins

Since no crystal structures are available for ADRA1A, BKRB2, GPR116, GPR132, LPAR2, S1PR2, and S1PR3, known GPCR crystal structures with different sequence identities were selected to generate 3D structures of these proteins by using a reported protocol 35. Their structures were retrieved from the Protein Data Bank (http://www.pdb.org/pdb/) and prepared with SYBYL-X 2.0 (including repair of residues and energy minimization).

Homology modeling

The full sequences of ADRA1A, BKRB2, GPR116, GPR132, LPAR2, S1PR2, and S1PR3 were retrieved from UniProtKB/Swiss-Prot (http://www.uniprot.org/uniprot). In addition to the resolution, the sequence identities generated by aligning the targeted proteins with known GPCR crystal structures were used to select appropriate templates. According to the disulfide bridge annotations in UniProtKB/Swiss-Prot, the disulfide bridges/bonds in the targeted proteins were patched. After aligning the sequences of the targeted proteins with the appropriate templates, the alignment was manually adjusted according to the numbers of residues. The conserved motifs in the GPCRs (for example, "D/ERY" in TM3, "CWxPx" or "D/E6.30" in TM6 and "NPxxY" in TM7) were also applied to ensure the reasonability of the TM alignments. These models were constructed with Modeller 9.12 36. The visualization of the generated models was performed using the PyMOL program.
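Template choice in this homology-modeling step rests on the pairwise sequence identity between each target GPCR and the candidate crystal structures. A minimal, illustrative sketch of that calculation is given below: it computes percent identity over the non-gap columns of a pairwise alignment and picks the best-scoring template. The aligned strings are hypothetical placeholders, not the actual GPCR alignments used in the study.

```python
# Hedged sketch: percent identity from a pairwise alignment and template
# selection. The aligned strings are hypothetical placeholders.

def percent_identity(aligned_target: str, aligned_template: str) -> float:
    """Percent identity over aligned, non-gap columns of equal-length strings."""
    assert len(aligned_target) == len(aligned_template)
    pairs = [
        (a, b)
        for a, b in zip(aligned_target, aligned_template)
        if a != "-" and b != "-"  # ignore columns containing a gap
    ]
    matches = sum(1 for a, b in pairs if a == b)
    return 100.0 * matches / len(pairs) if pairs else 0.0


# Hypothetical alignments of one target against two candidate templates.
target_vs_templates = {
    "template_A": ("MESLVNPN-AV", "MESLINPNQAV"),
    "template_B": ("MESLVNPN-AV", "MQALVNSN-AV"),
}

scores = {
    name: percent_identity(tgt, tpl)
    for name, (tgt, tpl) in target_vs_templates.items()
}
best = max(scores, key=scores.get)
print(scores, "-> best template:", best)
```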
Molecular docking

Five compounds with IC50 values of less than 50 μM were classified as active compounds, while the remaining seven compounds were classified as inactive compounds. Surflex-Dock from SYBYL-X 2.0 was used for docking these compounds to the ten GPCR models, in which the total score was expressed as −log10(Kd) 37; a total score of 6.0 thus corresponds to a predicted Kd of 10⁻⁶ M (1 μM). The MOLCAD module implemented in SYBYL-X 2.0 was used to explore the potential binding pocket of the GPCR models based on the following parameters: minimum dots of 1000, dot density of 6.0 points/area, and probe radius of 1.4 Å. The main protocols or parameters set for docking included the following criteria: (1) Additional starting conformations per molecule were set to 10. (2) The maximum number of rotatable bonds per molecule was set to 100. (3) The maximum number of poses per molecule was set to 20. (4) The density of search and the number of spins per alignment were set to 9.0 and 20, respectively. (5) Pre-dock minimization, post-dock minimization, molecule fragmentation, ring flexibility, and soft grid treatment were turned on in the present work.

Statistical analysis

Statistical analysis was performed with SPSS 20 software (SPSS, Chicago, Illinois). The data are presented as the means ± SD. For comparisons between two data sets, Student's t test was used. For the analysis of three or more sets of data, ANOVA was used. *P < 0.05 and **P < 0.01 were considered statistically significant.
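A hedged sketch of the quantitative steps described above and in the animal studies: tumor volumes computed as V = lw²/2, a Student's t test for a two-group comparison, and one-way ANOVA for three or more groups. All caliper measurements below are hypothetical placeholders, not data from the study, and SciPy is used in place of SPSS purely for illustration.

```python
# Hedged sketch of the analysis workflow: tumor volume from caliper
# measurements (V = l * w**2 / 2), two-group t test, three-group ANOVA.
# All numbers are hypothetical placeholders, not data from the study.
import numpy as np
from scipy import stats


def tumor_volume(length_mm: np.ndarray, width_mm: np.ndarray) -> np.ndarray:
    """Tumor volume in mm^3 from caliper length and width, V = l * w^2 / 2."""
    return length_mm * width_mm ** 2 / 2.0


# Hypothetical caliper measurements (mm) at the end of treatment.
vehicle   = tumor_volume(np.array([14.0, 15.2, 13.5]), np.array([10.1, 11.0, 9.8]))
low_dose  = tumor_volume(np.array([11.0, 10.4, 12.1]), np.array([8.2, 7.9, 8.8]))
high_dose = tumor_volume(np.array([8.1, 7.5, 9.0]),    np.array([6.0, 5.7, 6.4]))

# Two-group comparison (Student's t test), as used for paired contrasts.
t_stat, p_two_groups = stats.ttest_ind(vehicle, high_dose)

# Three-group comparison (one-way ANOVA), as used for three or more data sets.
f_stat, p_anova = stats.f_oneway(vehicle, low_dose, high_dose)

print(f"t test p = {p_two_groups:.4f}, ANOVA p = {p_anova:.4f}")
```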
2023-02-06T14:39:59.344Z
2018-05-29T00:00:00.000
{ "year": 2018, "sha1": "0f51a8adc80fe9acbcc74899ef1809a323d54877", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41419-018-0689-4.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "0f51a8adc80fe9acbcc74899ef1809a323d54877", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [] }
16096002
pes2o/s2orc
v3-fos-license
Effects of concurrent administration of lead, cadmium, and arsenic in the rat. Humans are exposed to a number of toxic elements in the environment; however, most experiments with laboratory animals investigate only one toxic element. To determine if concomitant exposure to lead (Pb), cadmium (Cd), and/or arsenic (As) modified the changes produced by any one metal in various parameters of toxicity, 168 male, Sprague-Dawley, young adult rats were fed nutritionally adequate diets to which had been added 0 or 200 ppm Pb as Pb acetate, or 50 ppm Cd as Cd chloride, or 50 ppm As as sodium arsenate or arsanilic acid in a factorial design for a period of 10 weeks. At these concentrations, Cd and As reduced weight gain even when differences in food intake were taken into account; administration of both Cd and As depressed weight gain more than did either metal alone. Pb did not adversely affect food consumption or weight gain. Increased numbers of red blood cells (RBCs) were observed following administration of Pb, Cd, or As; usually more cells were observed when two or three metals were administered, compared to individual metals. Despite increasing numbers of circulating RBCs, hemoglobin and hematocrit were reduced, especially with the Pb-Cd combination and the Cd-arsanilic acid combination. Specific effects of Pb on heme synthesis were observed, including increased urinary excretion of delta-aminolevulinic acid; this increase was reduced by the presence of dietary cadmium. Analyses of blood showed values for the laboratory rat within normal ranges for blood urea nitrogen, creatinine, cholesterol, calcium, albumin, total protein, and bilirubin. Uric acid was increased by Pb, with little modification by dietary Cd or As content. Serum glutamate-oxalate transaminase activity was reduced by As. Serum alkaline phosphatase was greatly reduced by either As or Cd but not Pb. Combinations of As and Cd did not further reduce the activity of this enzyme. Kidney weight and kidney weight/body weight ratios were increased by Pb alone, with no effects of Cd or As alone or as interactions. Liver weight/body weight ratios were reduced in animals fed 50 ppm dietary Cd. Kidney histology shows predominantly Pb effects, namely, intranuclear inclusion bodies and cloudy swelling. Ultrastructural evaluation of kidneys from Pb-treated animals disclosed nuclear inclusion bodies of the usual morphology and mitochondrial swelling. Concurrent administration of Cd greatly minimized Pb effects on the kidney under conditions of this experiment. Liver histology suggests an increased rate of cell turnover with either As compound, but few specific changes. Introduction Human populations are seldom exposed to only one toxic element in the environment. While a great deal of research utilizing experimental animals has been carried out to study the effects of metals, the great majority of this work has involved administration of one toxic metal. The current experiment was undertaken to determine if concurrent administration of lead, cadmium, and arsenic, changed the severity or type of effect produced by the individual metals, i.e., if interactive effects occurred. Changes in the renal, hematopoietic, and hepatic systems were of special interest because Pb (1), Cd (2), and As (3) each have effects on these systems. For example, Pb, Cd, and As each affect specific steps in heme and porphyrin synthesis or metabolism which may not be rate-limiting under conditions of exposure to individual metals. 
However, their combined effect might produce anemia as shown by decreased hemoglobin concentration or hematocrit.

Materials and Methods

One hundred and sixty-eight male, albino, Sprague-Dawley, young adult rats were fed nutritionally adequate, casein-based purified diets (4) for a period of 10 weeks. There were 14 animals per group. The diets contained background or high levels of Cd, Pb, or As and were arranged in a 2 × 2 × 2 factorial design (Table 1). The background levels were less than 20 ppb for Cd and As and less than 50 ppb for Pb. Lead was added at the level of 200 ppm in the form of Pb acetate. Cd was added as CdCl2 at 50 ppm Cd, and As as sodium arsenate (Inorg As) in one set of diets and as arsanilic acid (Org As) in another set of diets. Analyses of the diets showed that the actual metal content was within 10% of these calculated values. The purpose of testing these two forms of As was to determine if differences in tissue concentrations or toxicity due to the chemical form of As would occur at the end of the 10-week period. Tissue concentrations of the metals are not yet available and will be reported later. Concentrations of these metals in brain, bone, liver, and kidney will be measured by plasma emission spectroscopy, which will determine concentrations of approximately 20 trace elements. This type of analysis will provide considerable information on interactions between various essential and toxic metals and may provide insight into the mechanisms of toxicity. Some of the parameters measured in this study were initial body weight, final body weight, food consumption, blood pressure (systolic), hemoglobin, hematocrit, red blood count, porphyrin intermediates, clinical chemistries (blood), kidney and liver weights, light and electron microscopy of liver and kidney, blood lead concentration, and blood, brain, kidney, and liver concentrations of As, Cd, and Pb. Generally, they are standard clinical measurements used in the assessment of toxicity. Hemoglobin was measured by a cyanohemoglobin technique (5), and blood urea nitrogen, creatinine, cholesterol, calcium, albumin, total protein, bilirubin, uric acid, alkaline phosphatase and glutamate-oxaloacetate transaminase (SGOT) by the respective methods cited (6-15). Urinary δ-aminolevulinic acid (dALA) was measured by the technique reported by Davis and Andelman (16). Fixation and processing of liver and kidney tissue for histological and ultrastructural examination were conducted by previously described methods and instrumentation (17). Choice of the rat as the experimental animal in this study was based on its usefulness for investigations of the effects of Pb, Cd, and As. The rat will readily consume a purified diet in which the concentrations of the metal can be closely controlled, and the nutritional requirements of the rat are well known. These were considered to be important factors in an experiment of this type in which metal interactions are of interest. Although the rat may have an unusual distribution of tissue arsenic (3), this difference was considered of smaller importance in a longer term toxicity study than in short term metabolic experiments. Concentrations of Pb (4), Cd (17), and As (18,19) to be incorporated into the diet were based on information from previous investigations. Concentrations were selected which would produce slight to moderate toxicity, that is, tissue accumulation of the metal with demonstrable morphologic and biochemical changes.
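Read literally, the two arsenic forms give two parallel 2 × 2 × 2 factorials that share the arsenic-free diets, which is consistent with 12 distinct diet groups of 14 rats each (168 rats). The sketch below enumerates the groups under that assumption (the sharing of the arsenic-free cells between the two diet sets is an inference, not stated explicitly); the analysis of variance described next then tests the main effects and interactions of these dietary factors.

```python
# Hedged sketch of the dietary design described above: two parallel 2 x 2 x 2
# factorials (one per arsenic form) that share the arsenic-free cells, giving
# 12 distinct diets with 14 rats each (168 rats). The deduplication of the
# shared arsenic-free cells is an assumption about how the two diet sets
# overlap, not something stated in the text.
from itertools import product

PB_PPM = (0, 200)          # lead acetate
CD_PPM = (0, 50)           # cadmium chloride
AS_PPM = (0, 50)           # sodium arsenate or arsanilic acid
AS_FORMS = ("inorganic", "organic")
RATS_PER_GROUP = 14

diets = set()
for form in AS_FORMS:
    for pb, cd, arsenic in product(PB_PPM, CD_PPM, AS_PPM):
        # Diets without arsenic are identical in both sets, so they collapse.
        as_label = form if arsenic else "none"
        diets.add((pb, cd, arsenic, as_label))

print(len(diets), "diet groups,", len(diets) * RATS_PER_GROUP, "rats")
# -> 12 diet groups, 168 rats
```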
The data reported are the results of statistical evaluation using the analysis of variance technique. Levels of significance are shown in each table. In the tables, the p value for a main effect or an interaction is reported beside the treatment. This does not mean that the number beside the significance level differs from control, but that the overall effect is significant at the magnitude specified in the footnote.

Results

The quantity of food ingested was not significantly affected by the level of Pb or Cd in the diet (Table 2), but was reduced by the addition of 50 ppm of either form of As to the diet. Weight gain was reduced more by both As and Cd than by Pb. Because the weight reduction observed in animals fed the As diets could be due to reduced food intake, the efficiency of feed utilization was calculated. Efficiency of feed utilization was reduced when either Cd or As was fed at the concentrations used in these diets (Table 2). The combined effect of Cd and As was to reduce feed utilization even more than occurred with either metal alone. Administration of Pb, Cd, or As resulted in an increase in the number of circulating red blood cells (Table 3). Combinations of metals usually produced an increase in the number of circulating red blood cells above that produced by either metal alone. Pb or Cd alone did not result in significant reductions in hemoglobin or hematocrit; however, inorganic As alone reduced both hemoglobin and hematocrit. The greatest decreases in hemoglobin and hematocrit were seen with the combination of Pb and Cd and with Cd and organic As (Table 3). The number of peripheral white blood cells was decreased by Cd but not significantly affected by Pb and As. Pb effects on heme synthesis can be seen by measuring the urinary excretion of dALA. Urinary dALA excretion was greatly increased by Pb; however, the magnitude of this increase was decreased by the presence of Cd (Table 4). Similar effects of Pb and Cd on blood Pb concentration can be observed (Table 4). Tissue Pb concentrations (kidney, liver, and bone) are needed to confirm the probable reduction of the body burden of Pb in the presence of dietary Cd. Analysis of blood showed values within the normal range for the laboratory rat for blood urea nitrogen, creatinine, cholesterol, calcium, albumin, total protein and bilirubin. Serum uric acid concentration was increased by Pb (Table 5). Serum alkaline phosphatase activity was decreased by both Cd and As, but was not affected by Pb (Table 5). The combination of Cd and As resulted in even greater reductions of alkaline phosphatase activity than that resulting from either metal alone. Alkaline phosphatase activity is derived from a number of different isoenzymes. The specific isoenzyme(s) affected by Cd or As were not determined in the current study. SGOT activity was greatly reduced by administration of As alone. Both kidney weight and the kidney weight to body weight ratio were increased by elevated levels of Pb in the diet (Table 6). Cd and As were without influence on these parameters (Table 6). The liver weight/body weight ratio was decreased by dietary Cd but not by Pb or As. The effects of these metals were also evaluated by light and electron microscopy. Liver sections examined by light microscopy from animals on the lead and cadmium treatments were indistinguishable from controls, except that some mild parenchymal cell swelling was noted in animals exposed to either form of arsenic.
Renal changes characterized by cloudy swelling of proximal tubule cells and intranuclear inclusion bodies (Fig. 1) were observed in animals given diets containing lead, with the notable exception of those animals concomitantly exposed to cadmium (Table 7, which reports the number of rats with inclusions over the number examined). These animals had few intranuclear inclusions and relatively little cloudy swelling. Morphological changes in kidneys of animals exposed to combinations of cadmium or either arsenical were relatively slight.

Discussion

The purpose of this study was to evaluate the combined biological effects of Pb, Cd, and As in the rat. Significant interactions between all three metals were relatively infrequent. The most consistent interactions were between Pb and Cd and between Cd and As. Generally, the presence of other metals reduced the magnitude of the Pb effect. Reduction in the efficiency of conversion of food energy into body weight gain was the most sensitive of the parameters measured in this study for detecting interactions of two or three metals. Reduced efficiency of food conversion may be an indication of impaired absorption of nutrients from the gastrointestinal tract or of a metabolic defect at the cellular level. Measurements of cellular utilization of carbohydrates in relation to oxidative phosphorylation and generation of high energy phosphate compounds would be of interest. Pb (1), As (3), and Cd (2) are all known to interfere with these processes. From the morphological and biochemical data currently available, it appears that there may be lower Pb absorption or retention at the higher level of Cd intake. This can be confirmed only when analyses of tissue Pb concentrations are completed. However, if Cd reduces Pb absorption, it is of interest to consider further a possible mechanism for this change. Recent evidence indicates that vitamin D increases gastrointestinal absorption of Pb when added to the diet of rats deficient in vitamin D (Dr. Hector DeLuca, Department of Biochemistry, University of Wisconsin, FDA contract report). Feldman and Cousins (20) report that incubation of Cd with kidney homogenates and isolated mitochondria from vitamin D-deficient chicks decreased formation of 1,25-dihydroxycholecalciferol from 25-hydroxycholecalciferol. Chicks fed diets containing 50 ppm Cd showed depressed production of 1,25-dihydroxycholecalciferol in kidney mitochondria. Another mechanism which may reduce Pb absorption at high levels of dietary Cd is damage to the absorptive surface of the gastrointestinal tract. Richardson and Fox (21) report gross, microscopic, and ultrastructural lesions in the proximal small intestine of Japanese quail fed a diet containing 75 mg Cd/kg. The villi were short and thick; the lamina propria had a dense cellular infiltrate; and the microvilli on absorptive villi were markedly shortened. Such changes are similar to lesions occurring in some malabsorption syndromes in humans. The possible mechanisms through which Cd may reduce the body burden of Pb are of interest with regard to the establishment of safe levels of exposure to toxic compounds in humans and animals. Reduction of the body burden of a toxic substance, in this example Pb, may be achieved through a mechanism which is generally deleterious to the health of the organism. Reduction of the ability to absorb the toxic compound Pb would be accompanied by reduced ability to absorb nutritionally required elements as well.
Many of the biochemical and hematological parameters reported here have been investigated in epidemiological or clinical studies of human populations. These parameters, however, give only a clinical endpoint picture of a complex metabolic situation, and further studies are in progress to elucidate the biochemical mechanisms for these phenomena in experimental animals. Rats fed higher levels of dietary Pb showed significantly increased serum uric acid concentrations. Concentrations of urate above 3 mg/dl are unusual in the rat. Whether this is a reflection of impaired renal excretion of urate by the kidney or impaired metabolism of urate by uric acid oxidase in the liver or kidney is not clear. Gouty changes have been described in humans having elevated body burdens of Pb (1). The specific isoenzyme(s) involved in the observed reduction of alkaline phosphatase activity by Cd and As in this study are not known and will be examined in future investigations. Varying degrees of inhibition of different isoenzymes by a metal have been observed; Kshirsagar (22) reported reduced kidney, liver and intestinal alkaline phosphatase activity, but increased bone alkaline phosphatase activity, in rats fed diets containing 2% stable strontium. Reduction in SGOT activity was observed in animals fed high levels of As. Administration of other metals, for example mercury (23), has resulted in an increase in SGOT, which generally reflects tissue damage. Reduction of activity by dietary As may indicate inhibition of the enzyme by As, and further studies are needed to examine this possibility. The metabolic complexities of multi-element exposure are also illustrated by the hematological findings in this study. All animals maintained fairly normal hemoglobin concentrations even though excretion of intermediates in porphyrin and heme synthesis increased severalfold. There is a substantial reserve capacity for the formation of hemoglobin, which is reflected in the maintenance of effectively normal hemoglobin concentrations despite two- to threefold increases in the excretion of intermediate products. Certainly not all heme synthesized goes to the formation of hemoglobin. Maintenance of this heme function, however, may have priority over some other uses for heme such as heme-containing enzymes. It may also be that the tissues or organ systems synthesizing hemoglobin, such as bone marrow, concentrate lower levels of the metals than cells synthesizing heme for other compounds. Absence of an effect in an end product such as hemoglobin does not constitute an absence of metabolic effect. Biochemical studies reported at this meeting (24-26) examine these other effects of As more thoroughly. This experiment probably has its greatest usefulness in suggesting directions for further research and the need for more specific parameters to assess toxicity. Careful control of nutritional conditions in studies of this type is clearly essential to the interpretation of toxicity data.
2014-10-01T00:00:00.000Z
1977-08-01T00:00:00.000
{ "year": 1977, "sha1": "4e6568a0589932c27f2c9839b636d20344e105ad", "oa_license": "pd", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1637428/pdf/envhper00485-0158.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4e6568a0589932c27f2c9839b636d20344e105ad", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
251425795
pes2o/s2orc
v3-fos-license
Evaluation of Bicanalicular Nasolacrimal Duct Intubation as an Adjunct in Surgical Ectropion Correction Background and Objectives: We aimed to analyze and compare the outcomes of conventional ectropion surgery procedures with and without concurrent bicanalicular nasolacrimal duct intubation to identify if the combination of procedures could serve as a novel surgical approach to treat lower eyelid ectropion. Materials and Methods: A retrospective review of all patients who underwent surgical correction for lower eyelid ectropion at the Cantonal Hospital of Aarau between January 2019 and December 2020 was performed. Patient medical records were examined for etiology, surgical correction technique and intra- and postoperative complications. The postoperative punctal position, the pre- and postoperative epiphora and reoperation rate were also documented. Two study groups consisting of cases with isolated and combined procedures were compared, with respect to postoperative punctual and lower lid position. Results: A total of 53 lower eyelids (35 patients) were included in this study. Six months postoperatively, the correct punctum position (p = 0.1188) and improvement of epiphora (p = 0.7739) did not significantly differ between the two groups. More complications were seen in the nasolacrimal duct intubation group (p = 0.0041), which consisted of cheese wiring and one tube dislocation. Conclusion: In our study, bicanalicular nasolacrimal intubation during ectropion surgery does not seem to improve the outcome of ectropion surgery and is, therefore, not recommended on a routine basis. Introduction Lower eyelid ectropion is one of the most frequent eyelid malpositions [1]. It is characterized by the eversion of the lower lid margin, leading from mild epiphora and dry eye disease to severe exposure keratopathy [1,2]. The etiology of lower eyelid ectropion varies; the most common etiologies are age-related involutional changes (dehiscence of the anterior and posterior lamellas and degeneration of the orbicularis muscle), leading to increased laxity of the canthi [1]. Other causes, such as congenital, paralytic, cicatricial and mechanical ectropion, are less frequent. Several surgical procedures for ectropion correction have been described, for example, lateral tarsal strip, lateral canthoplasty/canthopexy, lid shortening through a full-thickness wedge excision, combined with or without medial spindle/spiraling suture to address vertical eyelid laxity and correct punctal eversion [1][2][3][4][5][6][7][8]. Larger surgical interventions involving skin grafting or scar correction are sometimes needed for cicatricial and mechanical ectropions [5]. Favorable safety profiles and high success rates are inherent to all the above-established surgical techniques; however, ectropion occasionally reoccurs after all types of ectropion correction surgery. Mono-or bicanalicular nasolacrimal duct intubation is conventionally used to address congenital or acquired nasolacrimal duct obstructions or for traumatic nasolacrimal duct lacerations [9][10][11]. In the latter, the tube serves to readapt the lacrimal duct and prevent its scarring, yet it may also function to stabilize the lacerated lid in the correct position [12]. A case report study conducted by Higaki et al. published in 2013 described nasolacrimal duct obstruction treated with canaliculoplasty and nasolacrimal duct intubation, leading to a resolution of a concurrently present medial lower eyelid ectropion [12]. 
The authors suggested that the constant forces exerted on the lower eyelid by the intubation stabilized the lower eyelid in its correct position. However, they did not examine a cohort of patients. Inspired by their suggestion, we hypothesized that nasolacrimal duct intubation might also be useful for stabilizing the lower eyelid in cases of surgical ectropion repair. Since epiphora is often caused by a combination of partial nasolacrimal duct obstruction and lower eyelid laxity, nasolacrimal duct intubation is often combined with conventional surgical procedures for ectropion correction. In our study, we report our experience with nasolacrimal duct intubation as an adjunct to conventional lid shortening or punctal inversion procedures for lower eyelid ectropion correction. We aimed to analyze and compare the postoperative lower punctal positioning after ectropion surgery procedures with or without bicanalicular intubation. To our knowledge, nasolacrimal duct intubation has not been investigated as an adjunct to conventional ectropion correction procedures in any other cohort study before.

Study Design and Ethics

This study was a retrospective case series. It was conducted in accordance with the Declaration of Helsinki and approved by the competent ethics committee (Ethikkommission Nordwest- und Zentralschweiz EKNZ, approval identification number 2021-00011).

Surgical Procedure

All patients underwent lower eyelid ectropion correction surgery. The surgical procedures used in our cohort study aimed to correct either the horizontal laxity alone or both the horizontal and the vertical laxity causing the ectropion. The surgical approaches used to restore horizontal laxity were the lateral tarsal strip, lateral canthopexy, partial lateral tarsorrhaphy and the KZ-Procedure, while those aiming to restore horizontal and vertical laxity were the Lazy-T and any surgery combined with a diamond-shaped medial spindle/spiraling suture for punctal inversion. In one cicatricial ectropion case, in which skin grafting was needed, a Tripier's flap was also performed. For nasolacrimal duct intubation, a commercially available bicanalicular silicone self-pushed tube was used (Nunchaku©, FCI, Pembroke, MA, USA) (Figure 1). Both puncta were widened intraoperatively using a punctum dilator. Then, the canaliculi were probed and the lacrimal system was irrigated with 0.9% sodium chloride solution. Finally, the ends of the Nunchaku tube were inserted via the lower and the upper punctum, and the nasolacrimal duct was intubated along its entire length. All patients were examined within the first three postoperative days to ensure correct tube positioning. The tube was then left in place for approximately three months and removed during a postoperative outpatient follow-up consultation. The final follow-up examination was conducted approximately six months postoperatively, at which point clinical outcomes were recorded.

Subjects and Retrospective Patients' Records Reviewing

All theater lists at our institution were manually screened for cases of ectropion repair performed between January 2019 and December 2020.
Patient medical records (pre- and postoperative photographs, surgical logs, consultation notes) were investigated with respect to baseline patient characteristics, coexisting partial lacrimal obstruction, the etiology of the ectropion, the localization of the lower eyelid laxity, surgical techniques, intra- and postoperative complications, the postoperative punctal position, epiphora and reoperation rate. Epiphora was graded according to the Munk Scale and dry eye disease according to the Oxford grading scheme [13,14]. The ectropion cases were divided into two groups: Group 1 included cases with ectropion surgery without nasolacrimal duct intubation, and group 2 included cases with ectropion surgery with bicanalicular nasolacrimal duct intubation. For the bicanalicular nasolacrimal duct intubation, patients with concurrent partial lacrimal duct obstructions were chosen.

Data Analysis

Statistical analysis was performed with Prism 5.0c (GraphPad Software, La Jolla, CA, USA). Descriptive statistics included medians and ranges. Nonparametric tests were performed for all values not normally distributed. The Mann-Whitney test, or Fisher's exact test in the case of dichotomous datasets, was performed to compare the two groups. The level of significance was set at a p value of <0.05.

Results

In total, 35 patients (53 lower eyelids) were included in this study. Group 1 comprised 18 patients (27 lower eyelids) who underwent isolated lower eyelid ectropion surgery, and group 2 consisted of 17 patients (26 lower eyelids) who underwent lower eyelid ectropion surgery combined with bicanalicular nasolacrimal duct intubation. Patients in group 2 had a small degree of nasolacrimal stenosis. Patient characteristics and the results of the statistical analysis are shown in Table 1. Table 1. Patients' characteristics, pre- and postoperative findings.
Epiphora was assessed with the Munk Scale (0: No epiphora; 1: Epiphora requiring dabbing less than twice per day; 2: Epiphora requiring dabbing 2-4 times per day; 3: Epiphora requiring dabbing 5-10 times per day; 4: Epiphora requiring dabbing more than 10 times per day; 5: Constant epiphora). Dry eye severity was assessed according to the Oxford grading scheme. Preoperatively, patients in group 2 had a correct position of the punctum (19.2%) less frequently than patients in group 1 (44.4%). Six months postoperatively, both group 2 (65.4%) and group 1 (85.2%) usually had a correct punctual position. The improvement rate did not significantly differ (p = 0.4837) between group 1 (71.4%) and group 2 (57.1%). The ectropion recurrence rate was low in both groups. In group 1, one eyelid had to be reoperated at two months postoperatively due to suture breakage. In group 2, two patients had to undergo a revision surgery due to suture breakage in one patient and ectropion recurrence in the other patient. The latter patient had a cicatricial ectropion, known for its higher recurrence rate. Other postoperative complications related to the bicanalicular nasolacrimal duct intubation included a linear dilatation of the tear duct punctum in the sense of cheese wiring and one dislocation of the tube. The total number of postoperative complications was significantly higher in group 2 than group 1 (p = 0.0041) due to the relatively high rate of cheese wiring. No intraoperative complications were described. The median preoperative epiphora according to the Munk Scale did not significantly differ between the two groups (p = 0.4466). The postoperative Munk Scale improved similarly in both groups (p = 0.7739). The rate of preoperative dry eye disease was higher in group 2 than group 1 (p = 0.0108), and this difference appeared to be even bigger postoperatively (p < 0.0001). This suggests that the bicanalicular nasolacrimal duct intubation might have contributed to a better tear drainage. Since the main outcome of our study was the stabilization of the lower eyelid expressed through the punctual positioning, we examined the postoperative outcomes according to type of the surgical approach used. In group 2, the quota between surgical procedures aiming to correct only horizontal laxity versus horizontal and vertical laxity was 13/26 (50%). The equivalent quota in group 1 was 3/27 (11.1%). This difference was statistically significant (p = 0.028). The percentage of eyelids with improved postoperative punctum position was 44.4% (group 2) or 63.6% (group 1) after a surgical procedure addressing only the horizontal laxity (p = 0.6534) and 66.7% (group 2) or 100% (group 1) after a surgical procedure addressing horizontal and vertical laxity (p = 0.5165). Discussion To our knowledge, this cohort study is the first attempt to investigate the benefits and hazards of bicanalicular nasolacrimal intubation as an adjunct to ectropion surgery. In our study, we could not verify the hypothesis that the bicanalicular nasolacrimal duct intubation may lead to better punctum positioning or better stabilization of the lower eyelid. Furthermore, the bicanalicular nasolacrimal duct intubation leads to more postoperative complications. The only clear benefit of the bicanalicular nasolacrimal duct intubation observed in our study was a reduction in epiphora. This is, however, expected as the tube dilated and reopened the partially stenosized lacrimal duct. We acknowledge some limitations of our study. 
First, the size of the groups were relatively small. Second, the groups were not completely homogenous in terms of patient age and the surgical approaches used. However, the overall result of all surgical techniques used in our hospital is the horizontal eyelid shortening and, therefore, the technique to achieve this should not make a clinically relevant difference. Another parameter that differentiates the groups from each other is that group 2 patients had two reasons for epiphora: nasolacrimal duct stenosis and ectropion, while the group 1 patients had a patent lacrimal system. Nevertheless, this difference should not influence the main outcome of our study, which was the postoperative stabilization of the lower eyelid expressed through the punctual positioning. In conclusion, the results of our study suggest that the default nasolacrimal intubation is not recommended in ectropion surgery. Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Ethikkommission Nordwest-und Zentralschweiz EKNZ, approval identification number 2021-00011, approved 24.02.2021. Informed Consent Statement: Patient consent was waived due to the retrospective, anonymous analysis. Data Availability Statement: Data supporting the findings of this study are available within the article. Conflicts of Interest: The authors declare no conflict of interest.
2022-08-09T15:21:42.504Z
2022-08-01T00:00:00.000
{ "year": 2022, "sha1": "3391c02aacedc9970b54a731170d5bfb19d8e478", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1648-9144/58/8/1051/pdf?version=1659603656", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3288bd1488a702b9b8eaaaf34d10299f413b500f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
236403290
pes2o/s2orc
v3-fos-license
How Might Cognitive Factors Affect Iranian EFL Learners' Response to Feedback Provided on Writing? An Individual Differences Perspective

The researchers exploring the effectiveness of feedback have normally contrasted groups of learners receiving different types of feedback treatments. However, since there are always individual responses to any pedagogical treatment within a group of students and the effects of feedback can vary significantly even in participants receiving the same kind of feedback in the same experiment, the present study used a qualitative case study approach and techniques such as narrative construction and qualitative comparative analysis to see how individuals with different cognitive characteristics (namely, language learning aptitude and working memory) respond to various types of feedback (namely, direct feedback, indirect feedback with error codes and metalinguistic feedback with explanations) provided on linguistic aspects of their writings and how these characteristics might impact their learning from the feedback. The comparison of the students' responses to the feedback provided indicated that different individuals respond to and benefit from the learning potentials of different types of corrective feedback in different and their own unique ways. In fact, the learners having higher levels of aptitude and working memory were better able to resolve their problems and improve their writing as a result of the feedback received. On the whole, the findings of the present study confirm the important role of considering learners' individual characteristics in any pedagogical intervention.

Introduction

Corrective feedback (CF), defined as evaluative information and judgment provided on the students' linguistic performance, is widely acknowledged to benefit the learners and enhance the quality of their learning (Larsen-Freeman, 2003). The body of research conducted in this tradition has focused upon the purposes, processes and effects of feedback and its various features like "degrees of explicitness (direct vs. indirect), timing (immediate vs. delayed), the manner of delivery (e.g., handwritten vs. delivered using technology), the source (self, the teacher, or peers), and even the visual presentation (i.e., the color of feedback)" (Elwood & Bode, 2014, p. 334). A point worth mentioning is that corrective feedback has not been a controversy-free issue, and its effectiveness has been challenged by scholars such as Truscott (1996, 1999, 2009), who claimed that error correction is harmful and should be abandoned since it does not always fulfill its potential. On the other hand, cognitive theorists believe that corrective feedback must always accompany instruction since it plays an important role in facilitating the students' engagement and acquisition (Sheen & Ellis, 2011). In the same vein, most approaches to second language writing pedagogy have specified a primary role for feedback practice, and writing instructors in many education institutions around the world have equipped themselves with the knowledge of effective feedback strategies and offer this valuable asset to their learners with the intention of pointing out their errors and resolving their problems while engaging in the act of writing.
However, research undertaken regarding the role of feedback in L2 writing classrooms has referred to the fact that "there are no simple [and conclusive] answers to questions such as which activities merit feedback, how and when to give feedback and what the benefits of giving feedback are" (Long & Richards, 2006, p. xiii). There are also conflicting/indecisive findings in some areas of feedback such as feedback focus, extent and strategy (Ashwell, 2000;Bitchener, 2008;Chandler, 2003;Ellis, 2013;Ferris & Roberts, 2001;Guénette, 2007;Li, 2010). Consideration of different learning contexts (e.g., the consideration of the learning atmosphere and instructional methodology variables) and the characteristics and capabilities of different learners (e.g., students' prior educational and language backgrounds, learning style, values and beliefs, motivation, attitude and future goals, and other additional factors) can provide a way out of this controversy (Evans, Hartshorn, McCollum, & Wolfersberger, 2010;Guénette, 2007;Saadat, Mehrpour, & Khajavi, 2017) since it is generally believed that different individuals respond to and benefit from the learning potentials of different types of corrective feedback in different and their own unique ways (Kormos, 2012). Ferris, Liu, Sinha, and Senna (2013) also maintain that with few exceptions, the researchers exploring the effectiveness of feedback have contrasted groups of learners receiving different types of feedback interventions. However, since there are always individual responses to any pedagogical treatment within a group of students and the effects of feedback can vary significantly even in participants receiving the same kind of feedback in the same experiment (Ferris, et al., 2013; Santos, Lopez-serrano, & Manchon, 2010), more investigation must be directed towards examining how learners with different individual differences profiles benefit form and respond to different types of feedback. Ellis (2010) maintains that "The vast bulk of CF studies have ignored learner factors, focusing instead on the relationship and the effect of specific CF strategies and learning outcomes" (p. 339). Ferris (2010) also emphasizes the important role of personal attributes and individual differences in L2 learners' response to corrective feedback by suggesting that "some students benefit more from CF than others, for a variety of reasons such as motivation, learning style, and metalinguistic background knowledge" (p. 197). Storch and Wigglesworth (2010) have further asserted that neglected individual differences such as learners' linguistic backgrounds and affective factors such as their beliefs and attitudes, their levels of motivation and cognitive competencies can influence the outcome of any writing interventions and the learners' uptake and retention of the feedback received to a great extent. 
Review of the Related Literature As for the influence of cognitive individual differences on writing, it is hypothesized that learners with high metalinguistic awareness and good deductive skill (i.e., a high aptitude) and those who enjoy from a satisfactory level of working memory capacity are more readily able to adequately devote their attentional resources to different aspects of writing and can more effectively use their metalinguistic knowledge in consciously monitoring the linguistic accuracy of their written texts, noticing and identifying the gaps (and errors) in their grammatical knowledge and engaging more actively in problem-solving process to overcome the linguistic and organizational problems encountered during the writing process (Kormos, 2012). However, few studies have investigated the effect of cognitive and affective factors on the processing of feedback. In one of these studies, Sheen (2007) explored the relationship between language learning aptitude and written corrective feedback. More specifically, the researcher investigated the relationship between language analytic ability, as one of the main component of language learning aptitude construct, and uptake from direct correction with or without metalinguistic feedback. The findings of study indicated that learners having a high level of aptitude (in this case high language analytic ability) benefited more from feedback under both conditions and they were more advantaged when metalinguistic feedback was offered. It had also been suggested that learners with high level of language aptitude were more readily able to learn and consolidate their L2 knowledge through feedback. In this line of research, Shintani and Ellis (2013) studied how language analytic ability (conceptualized as one main component of language learning aptitude) can influence learners' processing of different written corrective feedback and metalinguistic explanation. The findings of the study pointed to the fact that language analytic ability was effective in the processing of both metalinguistic and direct feedback; however, this influence was dependent upon some considerations such as "type of feedback, whether learners are asked to revise, and the nature of grammatical target" (p. 118). It is also hypothesized that L2 writers with different working memory (WM) capacities might process and benefit from the learning potentials of different types of feedback in different ways. In fact, it is maintained that WM may shape, explain, and predict the way that learners respond to corrective feedback (e.g., Mackey, et al., 2010;Trofimovich, Ammar, & Gatbonton, 2007). For example, Payne and Whitney (2002) have found that learners with high working memory capacity benefit more from feedback in face-to-face interaction and produce more modified output, whereas those with low working memory capacity benefit more from feedback delivered via computer-mediated communication. Some case studies have been reported in the literature which have used think-aloud and/or retrospective interviews to closely examine individual student writers' responses to written corrective feedback (e.g., Ferris, et al., 2013;Hyland, 2011;Storch & Wigglesworth, 2010). For example, Ferris, et al., (2013) conducted a multiple-case longitudinal study in which ten ESL university students wrote four in-class texts, revised them after receiving written corrective feedback, and participated in retrospective interviews after each of the first three writing and revision sessions. 
More specifically, they tried to analyze the students' selfmonitoring processes while writing and revising their texts and the individual and contextual factors that might affect their writing development. They found that the students considered the applied techniques (focused WCF, revision, and one-to-one discussion about errors) highly beneficial and suggested that "teachers should take a more finely tuned approach to corrective feedback and that future research designs investigating WCF should go beyond consideration of only students' written products" (p. 307). Moreover, Rahimi (2015) examined the extent to which L2 learners' field dependency and writing motivation (as two main individual difference variables) could influence their learning from and retention of a teacher's written corrective feedback in the short and long run. The results indicated that there was a strong relationship between field independence style and the students' successful short-term and long-term retention of corrections in the subsequent writings; however, writing motivation could only influence and facilitate the short-term retention of corrective feedback. Mallahi (2019) explored how learners with different levels of self-efficacy in writing responded to various types of corrective feedback provided on the linguistics aspects of their written texts. The results of a qualitative comparative analysis of the learners' performances confirmed that each individual can benefit from the learning potential of corrective feedback in different and unique ways. Han and Hyland (2019) explored two students' emotional reactions to teacher WCF and found that these students experienced different discrete emotions with varying object foci, valence, and activation, and these emotions fluctuated in various stages of writing especially in revision process. The researchers further recommended that academic writing teachers "should reflect on the appropriateness of their WCF strategies in local contexts, invite students to express and reflect on their WCF-evoked emotions, and increase students' awareness of the value of academic emotions" (p. 29). Furthermore, Zheng, Yu and Liu (2020) intended to understand the pattens of low-proficiency students' engagement with teacher corrective feedback in writing by conducting an in-depth case study on two Chinese LP students. After analyzing teacher WCF, students' written drafts, their immediate oral reports and retrospective interviews, the researchers found that "their engagement was distinctively different in terms that one's engagement was relatively extensive, especially in the affective aspect, but the other's engagement was at a relatively limited level, characterized by negative emotions and scant cognitive engagement" (p. 1). Han and Hyland (2015) also highlighted the role of IDs in learners' multi-dimensional aspects of cognitive, behavioral, and affective engagement with WCF. In the same vein, Han (2018) investigated how learner and contextual factors can influence individual learners' engagement with WCF in L2 classrooms. 
By adopting an ecological heuristic perspective to conduct the study and after analyzing the collected data from multiple sources such as students' writing, verbal reports, interviews, field notes, and class documents, he maintained that individual learners' engagement with WCF can be understood "as a process of perceiving and acting upon embedded learning opportunities afforded by WCF, and highlight the importance of establishing an alignment between affordances and learner agency to enhance individual students' engagement with written feedback (p. 1). As the investigation of the literature revealed how the individual differences variables influence the use of feedback is still an under-researched area and few studies have explored the nature and effects of these individual differences (especially aptitude and working memory) within the subfield of writing. Saadat, et al., (2017) also argued that "research on the interaction between individual differences and writing feedback is still at its infancy and there is still much to be learned in this area" (p. 86). Benson and DeKeyser (2018) maintain that "carefully controlled experiments on written CF that systematically explore various individual difference factors are relatively few" (p. 4). Consequently, due to the importance of individual difference variables in how the individuals respond to different types of feedback, more research is needed to explore the relationship between individual difference variables and specific corrective feedback strategies targeting specific language features within more authentic classroom practices. In the same regard, the present study intends to answer the following research questions: 1. How do individuals with different cognitive profiles (that is, aptitude and working memory) respond to and learn from different types of feedback (namely, direct feedback, indirect feedback with notations and metalinguistic feedback) provided on linguistic aspects of their writing (that is, structure, vocabulary and cohesion)? 2. What kind of feedback strategy do the students prefer more and why? Method of the Study The present endeavor used a case study approach and attempted to experimentally explore how learners with different individual differences profiles (that is, different levels of aptitude and working memory) respond to and learn from different kinds of feedback (namely, direct feedback, indirect feedback with notations and feedback with metalinguistic explanations) provided on linguistic aspects of writing. The 'case study' is the study of the "particularity and complexity of a single case" (Stake, I995, p. xi). Dörnyei (2007) further asserts that "case study is not a specific technique but rather a method of collecting and organizing data so as to maximize our understanding of the unitary character of the social being or object studied" (p. 152). Generally, a narrative was constructed for each individual and a kind of qualitative case-by-case comparison (i.e., Qualitative Comparative Analysis) was adopted in this phase of the study, which is further elaborated upon below. 
Though case studies have been rare in research on corrective feedback in writing, several researchers have utilized this methodology effectively for studies that looked more broadly at teacher commentary and its impact on subsequent student writing (e.g., Goldstein & Conrad, 1990; Patthey-Chavez & Ferris, 1997), studies which have examined various aspects of L2 writers' responses to teachers' written feedback (e.g., Hyland, 1998;Hyland & Hyland, 2001, 2006, studies which have examined variation in case study participants' willingness and ability to revise their writing after receiving a teacher's written commentary (e.g., Conrad & Goldstein, 1999) and studies which have investigated the extent to which L2 learners' individual differences (e.g., formal knowledge of language, field dependency and writing motivation) influence their monitoring behavior during the writing process and predict their retention of a teacher's written corrective feedback in the short and long run (e.g., Ferris, et al., 2013;Rahimi, 2015). These case studies on teachers' corrective feedback and learners' response have provided some insights which inform and influence the design and procedures of the current study: (1) Teachers must also pay attention to different learners' behaviors while providing them with corrective feedback since individual student writers respond differently to teacher feedback; (2) A number of other factors like the learners' L1 background, their cognitive and affective individual predispositions and factors such as social and pedagogical context might influence the way learners consider and apply feedback they have received. On the whole, this body of case study work on teacher response to student writing provided models for our own research design. Another important point is that this was a naturalistic study, so the instructor taught writing and gave feedback exactly as he would have if any research endeavor had not been present. So, the data included the texts written by the students and the teacher's feedback on these drafts. Participants and setting In order to investigate how leaners with different cognitive characteristics respond to and learn from different types of feedback, the researchers chose the students of an essay writing course in University of Hormozgan, Iran. The class was held by one of the researchers during the whole semester and the students received instruction on different methods of supporting ideas and practiced how to write expository and argumentative essays in English. Due to pedagogical concerns and ethics of instruction, all twenty-two students in the class received feedback on different aspects of writing after completing their assignments. However, through the purposive sampling technique, only 4 students, who had fully completed all their assignments and received the intended feedback, were selected as the main participants of the study. More specifically, two students as the representative of individuals with High-and Low-level aptitude scores and two students receiving High and Low scores on a working memory test were selected as the participants of the study. 
In other words, based on the design of the study, they were treated as individual cases; on the basis of their cognitive (aptitude and working memory) characteristics they were matched with each other, and their individual profiles were compared in terms of how they responded to different types of feedback and how their individual characteristics inhibited or facilitated their learning from and retention of various types of feedback.
Instruments used to assess the students' cognitive characteristics
3.2.1. Foreign language learning aptitude test
THE COLLEGES OF OXFORD UNIVERSITY CLASSICS LANGUAGE APTITUDE TEST (Specimen of Written Test at Interview Issued 2010) was used to assess Iranian EFL learners' aptitude in learning a second language. This instrument aims at measuring the extent to which EFL learners have the required potential to pursue and go through the challenging process of learning a second language. The test contains three parts targeting the sub-constructs assumed to represent foreign language learning aptitude, measuring the students' ability in paired associates, verbal intelligence and grammatical sensitivity. In order to ease and ensure the students' understanding of the test and to make the test more valid for use with Iranian EFL learners, the instructions were translated into Persian.
Working memory test
A computerized Persian version of the reading span test (RST) developed by Shahnazari (2011) was used to measure the participants' working memory capacity, since working memory is believed to be language independent and, in order to avoid conflating WM with L2 proficiency, the test must be administered in the students' mother tongue (Miyake & Friedman, 1998). In the administration session, each individual student was required to read sets of sentences (a total of 64 items: 10 practice sentences and 54 test sentences) on a computer screen in PowerPoint format and report on the semantic acceptability of each sentence (processing assessment), and then recall the final word of each sentence when prompted (storage assessment). In order to facilitate the students' responses to this test, the researcher designed a sheet including some instructions and examples for how to perform on the test and a set of slots to enable the students to write their responses regarding the semantic plausibility of the sentences and the recalled words for each set of sentences.
Writing assignments and feedback offered
As was previously mentioned, the students received instruction on different methods of paragraph development and practiced writing expository and argumentative essays. Their written assignments and the feedback received served as the main data of the study, based on which an individual profile/narrative was constructed for each student. The first assignment students were required to write was a descriptive paragraph in which they described either a place they had visited or a person they were familiar with. After completing and delivering the assignment, the students received direct feedback, which involves reformulating and rewriting the learners' texts while attending to errors in the linguistic aspects of their texts (Thornbury, 1997). The second assignment was a cause and effect paragraph for which the students were required to write about the causes and effects of some common issues in their lives, such as the causes of car accidents or the effects of women working outside the home.
For this assignment, the students benefited from indirect feedback with annotations, which refers to the use of codes marking the types of errors made by the learners (Storch & Wigglesworth, 2010). Subsequently, they wrote a comparison and contrast paragraph for which they received metalinguistic feedback. In metalinguistic explanation, some negative evidence or (implicit) clues as to the rules of language are provided for the learners to enable them to understand the nature of the errors committed and correct the erroneous parts. Metalinguistic feedback is defined by Lyster and Ranta (1997, as cited in Ellis, Loewen, & Erlam, 2009, p. 304) as "comments, information, or questions related to the well-formedness of the learner's utterance". A point worth mentioning is that after receiving feedback for each assignment, the students were required to reflect upon the feedback and try to revise their texts or correct their mistakes accordingly. The students also completed two expository and argumentative essays, which served as the standpoint for seeing whether they had learnt anything from the feedback received and incorporated it while performing the subsequent tasks.
Procedure of data collection and analysis
For the purpose of the current study, the students of an essay writing course were chosen and their written texts were analyzed for errors in the linguistic and discoursal aspects of the texts produced. Subsequently, these learners were provided with different types of feedback and were then required to revise their texts based on the feedback received. More specifically, five different texts were collected from each student: a descriptive paragraph, a comparison and contrast paragraph, a cause and effect paragraph, one three-paragraph expository essay and one five-paragraph argumentative essay, all written during the classroom sessions. The students received feedback for the first three paragraphs and were required to revise their texts. It is worth mentioning that we adopted a focused feedback approach in which only the linguistic aspects of the texts produced by the learners (structure, vocabulary and cohesion) were targeted. Targeting only the linguistic aspects of the text can be justified by reference to recent research practice on feedback, which has confirmed the effectiveness of a focused approach in which teachers target a selected number of error types in their feedback provision, compared to comprehensive feedback, which may lead to inconsistent and inaccurate correction due to teachers' fatigue and burnout and thus have a demotivating effect on students who are confused and frustrated by the many error codes, underlines and corrections in their papers (Lee, 2013; Sheen, Wright, & Moldawa, 2009). As for analyzing and reporting the data in this phase, the researchers first created data files/narratives for each of the 4 students based on their marked and revised texts, their performance on the subsequent tasks, which showed their responses to and learning from feedback, and their preferences for different types of feedback. The narrative construction approach was selected since it provides a systematic and integrated way to organize the various pieces of information about each individual writer and then to compare the findings across individuals.
In fact, the main written texts and the students' revisions were compared to see how they respond to different types of feedback and the essays served as a stand-point to compare different students' learning and consolidation of corrective feedback provided on the linguistic aspects of writing. In fact, a qualitative comparative analysis (QCA) technique was used to compare the narratives already constructed for the individuals in order to compare the pair's performance, identify general patterns in the data and reach a meaningful interpretation of the patterns displayed by the case study participants who have undergone the treatment process (Schneider & Wagemann, 2007). Finally, the participants' views and preferences regarding the effectiveness of these different types of feedback were sought using an open-ended survey question: this question asked them which feedback strategy they preferred more and why. The contribution of aptitude to the individuals' responses to feedback: Fatemeh vs. Shahryar Fatemeh, as the representative of learners with a high level of aptitude, has a rather good competence in writing and for the first assignment, she has effectively combined description and anecdote, as the techniques of support, to describe one of her family members. The content and sequence of presenting the ideas were well-organized as well. She has also used a combination of simple and complex sentence structures that are generally error-free. The two main errors that were underlined and explicitly corrected for her were a run-on sentence and a missing past tense marker: Run-on sentence: My aunt's family and mine were congratulating this festival and saying hello to each other that in this time, I saw a boy who was tall, thin and has short-black hair. Explicit correction: While my aunt's family and mine were congratulating this festival and saying hello to each other, I saw a thin and tall boy with short black hair. The second assignment was a cause and effect paragraph for which the students received indirect feedback with error codes. Herein, due to the complexity of the task and the tension she might have felt while completing the task in the classroom, Fatemeh has committed two errors in sentence structure (SS) and five errors in form of the verbs (VF/VT) plus some other minor mistakes: Error and feedback in sentence structure: Car qualities are so low and even if the driver observes all the rules, but broken device or piece of mechanical structure can cause a disruption and finally an accident. (SS) Error and feedback in verb forms/tense: A person who drives a car should be focus (VF) on driving and when s/he talks with cell phone (Com) the amount of focusing in driving process will be reduce (VF) and cause (VF/T) accident. It is worth-mentioning that after receiving the feedback, the students were required to reflect upon the feedback received, ask their questions about the error codes to remove any vagueness in their understanding of the codes and consequently correct their mistakes and even revise the whole drafts. For the third assignment, the students were required to write a comparison and contrast paragraph and received metalinguistic feedback with comments and explanation as the feedback technique. Again Fatemeh has written a rather good text in terms of content and organization, but naturally she has made new types of errors. 
The reduction in the number of errors in verb forms shows that she has reflected upon the feedback received in the previous assignment and has been careful about this structure while writing this new task. Despite this improvement, she repeats a number of errors in the sentence structure, which can be partly attributed to her attempts in using complex sentence structures. Errors and metalinguistic feedback in sentence structures: First one is that, (not an affective introductory clause), s/he does the sport himself. Because most of (missing article) needs (subject-verb agreement) to be exercised, she can practice (what?!! The intended meaning is not effectively expressed). Although a person involve with this way (Incorrect subordination), not only has wasted his/her time but also money, too (Imprecise parallel structure). Due to dominance of these types of errors in her text, the teacher-researcher provided further explanations about the structure of English sentences. These points tried to illustrate to her how to move from simple to compound and then to complex sentence structures and how to use parallel structures. My contact with her revealed that she is a very ambitious and conscientious writer who is very concerned about her writing development. After these three writing tasks, the instructor started teaching principles of expository and argumentative essays and students wrote two essays in these genres. For these assignments, the instructor's feedback mostly focused on the content and organization of ideas and feedback on the linguistic aspects of the texts was only provided when the erroneous parts disrupted the intended meaning. These two essays served as reference points to see whether the students have learnt anything from the feedback received and whether they have been able to keep (i.e., retention of feedback) and transfer this knowledge while performing on new and more complex tasks or not. As for her performance in the expository essay, Fatemeh has written a well-managed essay which shows her great care and competence in writing. The rewarding point is that despite the fact that she has written a longer text compared to the previous assignments for which the students were required to write a single paragraph, she has managed to write a more conceptually refined and structurally accurate text. In addition, one of her main problems in the previous assignments that was related to the structure and forms of the verbs has been resolved in many cases in this task except for this part: Nowadays, earning money and those issues which are related to it are the most challenging part of everyone's life. It has been affect many aspect of human life. If there be a problem in financial aspect of life, it would cause to disrupting comfort and makes stress for those who are involved. The most interesting point about her performance is her response to the feedback received on the parallel structures which were erroneous in the previous assignment and for which she received thorough metalinguistic feedback with comments and explanations. In order to show her understanding of the feedback, she has purposefully used a parallel structure which, except for the be verb that should have agreed with the subject in terms of person and number, is structurally accurate. 
This simple point might be considered as evidence for claiming that this participant has learnt and applied the feedback received on the parallel structures: Not only financial problems but also change of employment are the most stress makers for nowadays people. However, the new pattern of errors which emerged in her text is related to the use of some imprecise vocabularies and lexical expressions. This problem is very common for less competent EFL learners who do not have enough access to authentic English written and spoken texts and as a result use the imprecise equivalents of words and expressions from their first language while producing a communicative output in the second language. For instance, she has written Life is involved challenging issues instead of life is full of or entangled with challenging issues or your mind get in to solve the problem instead of your mind is engaged in solving the problem. For the final assignment, the students were required to write in a more complex genre, i.e., argumentative essay, which has its specific rhetorical organization and would possibly affect the students' attention and performance on different aspects of writing. This complex task can serve as a stand point to see whether the students can keep and apply what they have learnt from the feedback for a longer time and write a better text or not. While confirming the efficacy of feedback in enabling the learners to resolve the problematic issues in their performance and to perform better in the subsequent tasks, the performance of Fatemeh in the argumentative essay indicated that complexity of writing tasks also affects students' performance in writing and can influence their learning from feedback. In fact, since the students in this task have been more engaged with the use of appropriate ideas and organization of the content, it is natural to see errors in other aspects of writing. Fatemeh's preoccupation with the organization of higher order levels of meaning has made her commit errors of the same nature (in sentence structure, verb forms and precision of words) but different in forms: This issue has been effected on most of not only individual life but also society conditions which will be effective on treating future generation. Children who were faced with such problems and were growed up with this situation will cause bad treatment or some negative bhaviours (shyness) of them and it will arise generation who are going to live in society. These fluctuations in the performance can possibly be interpreted as her inability to effectively learn from feedback and consolidate this knowledge. Regarding her preference for different types of feedback (namely, explicit correction, indirect feedback with error codes and metalinguistic feedback with comments and explanations), Fatemeh, because of favoring experiential and discovery-oriented approaches to learning prefers indirect feedback with error codes and maintains that: Based on my understanding of myself, I like to understand my mistakes by experience and if I receive direct feedback, I will possibly remember it at that moment but it won't become an experience for me. This type of feedback also does demolish the confidence of the learners. Shahryar, as the representative of students with low aptitude levels in the present study, has an intermediate competence in writing. As for the descriptive paragraph, he has described one of his friends. 
Compared to the descriptive text written by Fatemeh which was innovative in content and rather precise in structure, Shahryar's descriptive paragraph is very short and full of grammatical errors. The following are some of the erroneous and run-on sentences in his text that are explicitly corrected: Erroneous sentence: My friend is a very genial/affable person, he talks with everyone that meet every where easily, so outgoing. Explicit correction: My friend is a very genial and outgoing person who easily talks with everyone he meets. Erroneous sentence: Although he is very relax person but sometimes he becomes very nervous and angry easily but most of the times he tries to make us laugh with his jokes, he has sense of humor. Explicit correction: Despite of being a relax person, he sometimes becomes very nervous and angry. However, he has a good sense of humor and most of the times tries to make us laugh with his jokes. The same patterns of errors occurred in the cause and effect paragraph for which he received indirect feedback with error codes: About the roads that are one of the major reasons (MW, Com) we should express that if we don't have good and proper roads (Com) the (Art) more accidents will happen. But the main reason in my idea is the driver itself (WF) (Ro) if she or he consider all the driving rules (Com) absolutely accident will decreases (VF). In fact, since he is not competent enough in appropriately connecting the ideas with each other, there are many cases of run-on sentences in his texts. Some of the errors can be attributed to his careless and perfunctory manner in writing because it seems that he has not been effectively engaged in the process of writing and has not done any revisions. The analysis of his performance in the comparison and contrast paragraph indicates that despite some problems like missing punctuation marks, incorrect prepositions, imprecise words, misplaced conjunctive adverbs, etc., he had successfully managed to break his sentences and could write shorter and more precise sentence structures. To put it in another way, previous feedback has possibly urged him to be more careful and try to improve the structure of the sentences. An idea unit in the comparison and contrast paragraph: First of all (punctuation mark is missing) with watching sports on TV, we can save our money, time and etc. We can learn the the techniques of players on TV and then use it in real playing by ourselves (try to make it more precise to convey your idea in a better way). On the other hand (this conjunction is used to show contrast not similar ideas) there are also advantages (missing preposition) playing sports yourself. It can help to your healthy (make it more precise) and also you can meet your friends and talk to them. The possible learning form feedback by this participant was further scrutinized by evaluating his performance in the expository and argumentative essays. Despite some problems in the overall unity and connections of the ideas to each other (i.e., cohesion and coherence) and the existence of some simple and in some cases, imprecise words and expressions in the essays, he has again tried to write shorter and more accurate sentences. 
Notwithstanding this effort, similar to the performance of the high aptitude learner, there are pieces of evidence of fluctuations in his performance and he has written sentences that are unnecessarily long and suffer from some grammatical mistakes: An ungrammatical and run-on sentence in the expository essay: But one point which you should not forget is that you should know your talent and your favorites then go for it and try to be the best in your field and find a suitable and proper job which can rich you in your life so be creative and be hardworking. An ungrammatical and run-on sentence in the argumentative essay: As the result of enhance the level of expectations unfortunately the number of divorces has increased in our society and the couples seeking for separation while their children especially in early years of their lives need to be with their parents. Regarding his preference for different types of feedback, this participant prefers metalinguistic feedback because he thinks by this feedback he could consciously learn rules and he had tried to apply them in his writing. The comparison of responses of high and low-level aptitude learners to different types of feedback indicated that both learners could learn and apply the specific types of feedback for a short term, but for varying degrees. It was also revealed that the complexity of the task and the students' level of engagement could affect their writing and learning from the feedback. On the whole, it can be asserted that Fatemeh who has a higher level of aptitude and hence a higher level of writing competence, has been able to benefit from the feedback in a more effective way and, as a result of the feedback received had been able to write more refined and complex sentence structures. have reached in writing. However, it is natural to see some local errors that do not greatly disrupt or affect the chain of ideas/meaning in her text: Maybe some of them when are drunk do driving (SS) which lead (Prep) accidents. Second, terms of establishing (WW) driver license for people somehow (WO) is easy. She has also made some local level errors, especially in terms of the precision of words and expressions used, in the comparison and contrast paragraph: The view of cities and buildings seems more beautiful in compared with in (find correct expressions). Long years ago (try to use a more precise expression + use correct punctuation mark) houses and buildings had a simple structure and traditional structure. For the expository essay, she has written a coherent, well-organized and well-managed text. She has also used a variety of simple, compound and complex sentence structures in this essay; however, there are some cases of local errors in the form of the verbs as well: Although a lot of people are well-educated, knowledgeable and skillful, they are not successful to find their ideal job to live better. They will deal to a lot of challenges when they begin to find a job for themselves. In spite of have being less job opportunities, there are some effective steps which help you find a good job. The performance of this participant in the argumentative essay indicates that as the task becomes more complex, the management of different aspects of writing, even for this student who has a high working memory score, becomes more challenging too. As a result, the number of errors increases while performing such complex tasks. The following extract is the introductory paragraph Somayeh had written for the argumentative essay. 
The main reason for the abundance of errors in this text can be her occupation with the content and sequencing of ideas because the students had been required to briefly include, to the extent possible, all the necessary argumentative moves such as introducing the topic, stating the significance of the topic, presenting and refuting the counterargument and coming up with a decisive thesis statement in this introductory paragraph. Despite her high score in the WM test as an indication of her high WM capacity, she has not been able to give a balanced attention to all aspects of writing in this paragraph and consequently, she has made many errors in the structure of sentences, connections between ideas and choice of words: This fact is obvious that today media, a large number of TV programs and magazines get into couples' issues which deal to divorce. Definitly divorce has a lot of bad affections that influence either parents or children. However most of people and psychologists believe the children must be supported by parents, so they should stay together. Nonetheless the fact might be true, but if parents stay together bad affections of this cool relationship might influence both the parents and the children. Also the best years of the parent's life and their youth will be lost. After relieving from this tension, she has written more structurally refined and semantically precise sentences even immediately after this introductory paragraph and in the rest of the essay, which confirm the idea that the complexity of the task can highly affect the precision of writing and the students' learning form and use of the feedback already received. The first body paragraph in the same argumentative essay: Those who believe parents should stay together state that the children must be supported emotionally by their parents to go through a normal way of life. To be honest, this opinion sounds logical since in a normal way a child has to live with his family that includes father, mother and the child. In this way, the children feel comfortable. But the more important fact is staying in a cool relationship and behaving in a surface way by parents in very offensive for children. They understand there is no warm and friendly atmosphere in their home. Seeing such a picture in every day's life creates a negative imagination in children's mind. Also they will have a bad mental and emotional pressure. Regarding her views about different types of feedback received, Somayeh prefers metalinguistic feedback with comments because she thinks that by such a feedback strategy, they can be more directly aware of the aspect of their problems and, thus, can make informed decisions to solve those problems and improve their writing. Fatemeh is an upper-intermediate proficiency level student of English Language Teaching. Her score in the WM test, which assessed both processing and storage of information, indicated that she has a rather low working memory capacity. In the descriptive paragraph, she is describing her cousin. In this text, the grouping and connection between the ideas that are necessary for creating a coherent text are not well-handled and some of the lexical expressions used are not precise enough. These are the introductory sentences for which she has received explicit correction: A person I will always remember is my cousin because of his appearance and gentle manner. The first thing I notice when I look at him is his size. 
He stands at shoulder height next to me; indeed, he is a head taller than other children his age, and definitly stronger, recently, his father signed his up for football, because it is a good thing the children play football to training for future. Explicit correction: A person I will always remember because of his appearance and gentle manner is my cousin. The first thing I notice when I look at him is his height. When he stands next to me, he has a height close to my shoulder. Indeed, , he is a head taller than other children of his age and is definitely stronger. Recently, his father had signed him up in the football team as a training for his future. Compared to the rather refined text written by the student with high working memory capacity, the analysis of cause and effect paragraph written by this student revealed many grammatical errors from mistakes in spelling to run-on sentences, which have made her whole text ineffective: There are many reason (PL) why sports teams (WF) are unsuccess or failure (WF). … Also becaus (Spl) of careless (WF) of their managers (Com) the team cannot achieve success and caus (WF) emotion pressur (Spl) and absurd-mindedness (RO) sometimes they are far away from their families and it is the other emotion pressur that bother (AGR) teams that they are do exercises on another country for example (SS). There are also many grammatical errors in the comparison and contrast paragraph she had written: … However, playing football is not dangerous as much as wrestling but also they insured in sometimes but a little (is it a correct parallel structure?). They have more facilities and also have good income. Although they are sport and fit your body but they have different condition to achieve tier goals (incorrect subordination + sentence structure needs revision). Despite receiving thorough metalinguistic feedback with comments, these patterns of errors continued to the expository essay in which the ideas are mixed and most of the sentences are imprecise. The following extract is the introductory paragraph she had written for the expository essay: Government says that there are just %20 of young population jobless. It's true or not? is there any jobs or young people search for the jobs that is hard to find for to be luxury today by this special situation of life (hard to find good job) the job seeker must to have those three most important features and well done in prepare a letter of application, conduct an internet job search and perform will in the interview. Errors of the same nature and her perfunctory and unmotivated manner in writing continues to the argumentative essay which is more complex in nature and demands more sophisticated levels of mental processing. She had written the whole essay in three short paragraphs that are quite inconsequential in terms of the appropriateness of the content and adequacy of the required argumentation. Despite the fact that she had reduced the length of sentences, she had not been able to use the feedback received and could not write a more structurally accurate and semantically refined text. The following extract is the whole text she had written and delivered to the instructor; however, it is not clear whether she had considered it as the introductory paragraph or as the entire essay: Nowadays the number of divorce has increased in lots of the reason. It must be controlled by systematic organization to have good and health society. 
According to this reasn the children are the most important part cause this problem can affected them easily and chang their future life. Some sociologist have their own opinion that tell the parent cause of their child and to keep their family shoulden't be sperate. Is it an opinion but the issue is not just to keep the form and family member together caus it effect wors. if the problems become worse and intensify it's affect on the emotion and their mental health. The abundance of grammatical errors in her text indicates that she has not paid any attention to the feedback received and continues to write in her own style. Part of the reason for these problems can be attributed to her inadequate level of competence in writing which cannot be developed by mere feedback and requires a high level of commitment and practice to be improved. Moreover, it seems that she has not been fully engaged in the process of writing, might not have fully noticed and processed the feedback received (i.e., her level of engagement with feedback is low) and only writes whatever comes into her mind without any concerns for the proper organization of ideas or monitoring the structure of sentences. Regarding the impact of WM on her writing compared to the performance of Somayeh who had a high WM capacity, Fatemeh has a limited control over different aspects of writing and her texts are full of grammatical and syntactic errors. In fact, this low WM capacity has not enabled her to sustain and perform well in writing and to learn from and apply the feedback received in subsequent writing tasks. Finally, similar to her peer, she prefers metalinguistic feedback because she thinks comments and explanations had made the points more clear for her and she had known the reasons for her errors that could help her know her deficiencies in writing in English. Discussion After qualitatively comparing the performances of learners with different individual characteristics on different writing tasks and scrutinizing their responses to different types of feedback, a number of considerations and patterns emerged in the data that are presented and discussed here. The initial and the most important observation which can be driven by the findings of the present study is that different individuals respond to and benefit from the learning potentials of different types of corrective feedback in different and their own unique ways (Ferris, 2012;Kormos, 2012;Rahimi, 2015) to the extent that learners having similar individual characteristics might again perform differently while facing the feedback. The finding of the present study also supports Ferris's (2010) conceptualization that "some students benefit more from CF than others, for a variety of reasons such as motivation, learning style, and metalinguistic background knowledge" (p. 197). In fact, the findings of the present are in line with the idea that learners' individual characteristics and contextual factors affect how and to what extent they respond to the teacher's feedback and develop their interlanguage (Ellis, 2010). This different level of engagement and learning from feedback could also be attributed to the individual factors of student beliefs and goals, and contextual factors of student-teacher relationship (Zheng, et al, 2020). It was also found that learners with higher levels in aptitude and WM had a better response to feedback, albeit to varying degrees. 
In fact, the student having a higher aptitude score has been able to produce texts which are more structurally refined. This finding also confirms the association between aptitude and syntactic knowledge, which enables student writers to engage in efficient grammatical encoding and to write more grammatically accurate and structurally complex texts (Kormos & Trebits, 2012). Considered a dynamic and complex construct, aptitude has also been regarded as an important predictor of foreign language learning and of students' performance in all language skills (Gilabert & Munoz, 2010). In line with the findings of the present study, Shintani and Ellis (2015) and Benson and DeKeyser (2018) have also highlighted the significant role of language learning analytic ability (as a component of aptitude) in learners' uptake from direct and metalinguistic feedback. The qualitative analysis of the narratives also confirmed the important role of working memory in the planning, execution and monitoring stages of writing, which make huge demands on writers' cognitive processes since the number of things that must be dealt with simultaneously is stupendous. Consequently, writers face cognitive overload while composing a text, which negatively affects their level of engagement in the writing process (Flower & Hayes, 1981; Kellogg, 1996) and will, in turn, greatly affect their retention and use of the feedback received. The finding of the present study in terms of the positive role of WM in students' learning from WCF is in line with Li and Roshan's (2019) finding that complex working memory is a positive predictor of the effects of metalinguistic explanation and of direct corrective feedback plus revision. However, they recommended that writing tasks should be adapted so that they can accommodate learners with different working memory ability levels. In other words, their findings highlighted "the value of an interactional approach to the role of individual difference factors, that is, the influence of cognitive factors is not fixed; rather, it depends on whether there is a fit between the processing demands of the learning task and the learners' cognitive strengths" (p. 12). Although the feedback given could, to some extent, enhance some of the students' performance in the designated aspects of writing, it was not able to resolve all these problems in the students' subsequent texts. This further adds to the debate over the effectiveness of corrective feedback, since some researchers doubt its potential to improve L2 learners' writing accuracy in subsequent writings (e.g., Fazio, 2001; Truscott, 1996, 1999, 2004, 2007). In fact, many of the problems found in the students' drafts were related to the students' prerequisite knowledge of grammar and vocabulary, which should have already been established. Therefore, effective teaching of vocabulary and grammar, as separate courses or as a complement to writing, can greatly affect the students' accuracy and fluency while writing. The dominant error made by the participants in the present study was related to sentence structures, which were mainly run-on.
Rahimi (2009) suggests that this phenomenon stems from the interference of the learners' L1 (Persian) and the dominant teaching method, i.e., the grammar-translation method, in the EFL context of Iran, and could be effectively resolved by teaching correct sentence structures in a more practical way, because most Iranian EFL learners consider grammar a set of rules to be memorized rather than a tool for generating accurate and fluent communicative output. Moreover, as Hu (2003, as cited in Yang & Lyster, 2010) believes, "the exclusive use of traditional grammar translation approaches is problematic and results in learners who are able to achieve high scores on discrete-point grammar tests yet unable to communicate fluently and accurately in communicative context" (p. 236). There were also some fluctuations in the students' use of feedback while performing different tasks; yet, as Vygotsky (1978) maintains, development and learning involve both progressive and regressive moves, and regressive moves are also helpful in moving learners forward. In many cases the students, despite being able to remove some of their mistakes in the following tasks as a result of the feedback received, repeated errors of the same nature in subsequent tasks. This might also be interpreted as a difficulty with long-term retention of the feedback or an inability to effectively incorporate the feedback received. Moreover, the analyses of the students' responses to feedback indicated that most of the learners performed better and mostly incorporated the feedback in the expository essay, which served as a standpoint to show short-term retention of feedback. This finding is in line with studies that found feedback on form could help improve students' writing in the short term (Ashwell, 2000; Fathman & Whalley, 1990; Ferris & Roberts, 2001). However, transfer of learning and long-term retention and use of feedback did not occur for the argumentative essay, which was structurally more complex. In Robinson's (2001) definition, "task complexity is the result of attentional, memory, and other information processing demands imposed by the structure of the task on the language learner" (p. 29). In fact, the complexity of this task, which required balanced attention to various aspects of writing, greatly affected the students' performance and their use of the feedback received. In this case, the complexity of the argumentative essay was challenging even for the students with a high working memory score, which confirms the idea that the complexity of the task can strongly affect the precision of writing and the students' learning from and use of the feedback already received. Finally, most of the students preferred metalinguistic feedback, and it seems to be a more effective feedback-provision methodology for Iranian EFL learners. This finding confirms the general observation that learners have positive attitudes towards teachers' written feedback and expect their instructors to provide them with some feedback and comments on their errors (Jang, 2020). In the same regard, as was evident in the analysis of the cases, whether and which type of feedback is effective in improving learners' noticing of and learning from feedback depends on a complex and dynamic interaction of an array of linguistic and individual factors (Storch & Wigglesworth, 2010).
In the same vein, teachers might need to spend more effort when planning and giving CF and consider more effective strategies to meet learners' needs.
Conclusion
The present study explored how learners with different individual characteristics respond to and learn from different types of feedback provided on linguistic aspects of their texts. The qualitative analyses of the students' written texts and the feedback they had received corroborated the view that different individuals respond to and benefit from the learning potentials of different types of corrective feedback in different and their own unique ways. Feedback could also improve only some aspects of the students' writing, and uptake from feedback only occurred for those features that were important to the learners and to which they paid conscious attention. In other words, uptake and learning from corrective feedback are highly dependent on the learners' depth of engagement with errors and their own concerns for their writing development (Storch & Wigglesworth, 2010; Troia, Harbaugh, Shankland, Wolbers, & Lawrence, 2013). In fact, student engagement with written corrective feedback is generally believed to correlate positively with academic achievement, language acquisition and writing development (Zhang & Hyland, 2018). On the whole, the findings of the present study confirm the important role of considering learners' individual characteristics in any pedagogical intervention. In fact, by equipping themselves with insight into and knowledge of the learners' personal attributes and the important role they play in the learning process, teachers would have a chance to better design their instructional methods and make use of the most suitable learning activities that respond to individual learners' needs and thus enhance the quality of their learning.
Funding: This research received no external funding.
2021-07-27T00:05:10.492Z
2021-05-30T00:00:00.000
{ "year": 2021, "sha1": "5b6e5705026eb7efb0df25550b12f7c869f1943c", "oa_license": "CCBY", "oa_url": "https://al-kindipublisher.com/index.php/ijllt/article/download/1743/1423", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "654ffaa155729af9c8da239c35d1b6fbb4ae4bc3", "s2fieldsofstudy": [ "Education", "Linguistics" ], "extfieldsofstudy": [ "Psychology" ] }
256864900
pes2o/s2orc
v3-fos-license
East Coast Fever Carrier Status and Theileria parva Breakthrough Strains in Recently ITM Vaccinated and Non-Vaccinated Cattle in Iganga District, Eastern Uganda
East Coast fever (ECF) is a tick-borne disease of cattle that hinders the development of the livestock industry in eastern, central and southern Africa. The ‘Muguga cocktail’ live vaccine, delivered by an infection and treatment method (ITM), remains the only immunisation strategy of controlling ECF. However, there are challenges of the live vaccine inducing ECF carrier status in immunised animals and the possibility of lack of protection from parasite strains that are antigenically different from the vaccine strains. In Uganda, there are insufficient data regarding the ECF carrier status and T. parva genetic diversity in vaccinated and associated non-vaccinated cattle to assess the effectiveness of ITM vaccination. Blood was collected from recently ECF vaccinated (98) and non-vaccinated (73) cattle from Iganga district in Eastern Uganda at 120 days post-vaccination. The p104 gene nested PCR was used to screen for T. parva DNA, and 11 minisatellite and 3 microsatellite markers (SSR) were used for genotyping. Two minisatellite markers (MS7 and MS19) were used to determine whether ECF carrier status was due to the T. parva vaccine or local strains. The prevalence of T. parva based on p104 nPCR was 61.2% (60/98) (RR 2.234, 95% CI 1.49–3.35, p-value < 0.001) among recently vaccinated cattle and 27.4% (20/73) (RR 1.00) among associated non-vaccinated cattle. The Muguga cocktail vaccine strains were responsible for carrier status in 10 (58.8%) by MS7 and 11 (64.7%) by MS19 in vaccinated cattle. Genotypes of T. parva with different-sized alleles to the vaccine strains that could be potential ‘breakthroughs’ were detected in 2 (11.8%) and 4 (23.5%) isolates from vaccinated cattle based on MS7 and MS19 minisatellite markers, respectively. Using 14 SSR markers, T. parva diversity was higher in vaccinated (Na = 2.214, Ne = 1.978, He = 0.465) than associated non-vaccinated (Na = 1.071, Ne = 1.048, He = 0.259) cattle. The principal component analysis (PCA) showed isolates from vaccinated cattle were closely related to those from non-vaccinated cattle. The analysis of molecular variance (AMOVA) revealed high genetic variation (96%) within T. parva isolates from vaccinated and non-vaccinated cattle but low variation (4%) between vaccinated and non-vaccinated cattle. This study reveals the role of ITM in inducing the carrier status and higher T. parva genetic diversity in vaccinated cattle. The low genetic variation between T. parva isolates in both vaccinated and non-vaccinated cattle may be suggestive of the protective role of vaccine strains against genetically related local strains in the study area.
Introduction
East Coast fever (ECF) is a tick-borne disease of cattle which is caused by a hemoprotozoan parasite called Theileria parva (T. parva). The parasite is transmitted by a brown ear tick (Rhipicephalus appendiculatus) and it is highly prevalent in eastern, central and southern Africa. East Coast fever (ECF) ranks first among the important tick-borne diseases of cattle in the region and causes high mortality, especially in exotic and cross-bred cattle, as well
T. parva genetic diversity was assessed from the vaccinated and non-vaccinated cattle. Data generated from this study provide additional information on the establishment and persistence of carrier status and genetic diversity of T. parva in the vaccinated and non-vaccinated cattle in an ECF endemic area. This is important for the use of ITM in the integrated control of ECF in endemic areas.
Study Area
The study was conducted in Bulamagi and Nakigo sub-counties in Iganga district in the eastern region of Uganda (Figure 1). The coordinates for Iganga district are: Latitude: 0°36'33.01" N, Longitude: 33°28'7.00" E. Districts along its borders are Bugiri to the east, Namutumba to the northeast, Kaliro to the north, Kamuli to the northwest, Jinja to the west, and to the south lies Mayuge. The district has an average annual temperature of 22.3 °C and an average annual rainfall of 1313 mm, which are optimal for the survival of livestock and disease vectors such as ticks. There are two relatively drier seasons (December to March and June to July) [18]. There is sufficient rainfall throughout the year to sustain man and animals except in very rare circumstances that sometimes lead to drought.
Study Design
This was a cross-sectional study that involved sampling of cattle based on ECF-ITM vaccination status, whereby both recently vaccinated (120 days post-vaccination using the Muguga cocktail vaccine) and non-vaccinated cattle from the same farms were sampled. Vaccination status was confirmed from farmer records and special ear tags were put on the ECF vaccinated cattle. Fifteen cattle farms were selected from two sub-counties (Nakigo and Bulamagi) in Iganga district. Blood (5 mL) was drawn from the jugular vein into an EDTA vacutainer tube and placed in a cool box for short-term storage and, thereafter, transported on ice to the Molecular Biology Laboratory (MOBILA), College of Veterinary Medicine, Animal Resources and Biosecurity (COVAB) for laboratory analysis. Genomic DNA was extracted from cattle whole blood using the purelink TM genomic DNA mini kit (Invitrogen, Waltham, MA, USA).
The DNA concentration and purity were determined using UV spectrophotometry (NanoDrop) at 260-280 nm. The DNA was screened for the presence of T. parva parasites using primers designed from the p104 gene by nested PCR [19]. Positive p104 nPCR T. parva isolates were genotyped using fourteen satellite markers (SSRs): 11 mini- (MS) and 3 microsatellite (ms) markers (MS7, MS19, MS3, MS16, MS8, MS21, MS25, MS27, MS33, MS34, MS40, ms2, ms5, and ms7) [15]. The data generated were analysed to determine the prevalence, genetic diversity and persistence of carrier status of T. parva in recently vaccinated and associated non-vaccinated cattle in Iganga district. The MS19 and MS7 satellite markers were used to assess whether the carrier status was due to the vaccine or local parasite strains.
Recruitment and Vaccination of Animals
The animals were recruited into the study after written informed consent was obtained from the cattle farmers. The inclusion criteria were animals ≥ 8 months old, without fever (rectal temperature < 39.5 °C) and, for female animals, non-pregnant. Cattle were ear-tagged and weighed. The immunisation procedure was carried out as described previously [8,20]. The immunised animals were injected with 1 mL of a 100× dilution of the MCL01 stabilate. The vaccine was inoculated subcutaneously in front of the right parotid lymph node and the animals were treated simultaneously with 30% long-acting Oxytetracycline (Tetroxy LA, Bimeda, Dublin, Ireland) at a dose rate of 30 mg/kg body weight by deep intramuscular injection. The vaccine used was the trivalent formula known as the Muguga Cocktail, composed of Muguga, Kiambu-5 and Serengeti-transformed stocks [21]. The vaccine was purchased from SCOPEVET (an authorised ECF vaccine importer/supplier in Uganda). The animals were monitored daily for 28 days post ITM vaccination for detection of clinical symptoms by the farmers and the Resident Veterinary Officers.
Detection of p104 Gene and Genotyping of Theileria parva Isolates Using Mini- and Microsatellite Markers by Nested PCR
Genomic DNA was extracted from 160 µL of EDTA vacutainer-collected cattle blood using the purelink TM genomic DNA mini kit (Invitrogen, USA) according to the manufacturer's instructions. The extracted DNA was stored at −20 °C until needed for T. parva screening and genotyping using nested PCR with p104 and SSR primers (Eurofins Genomics GmbH, Viehmarktgassen 1B/Büro, AT-1030 Vienna, Austria). The nPCR for an invariable region of the p104 gene [19] was used to detect T. parva DNA in the extracted DNA test samples along with positive and negative controls. The negative control was a PCR reaction master mix with distilled water as the DNA template. Theileria parva Muguga stock genomic DNA from the International Livestock Research Institute (ILRI) was used as the positive control. For detection of the p104 gene of Theileria parva, primary PCR amplification of the p104 gene generated a 496 bp fragment using forward and reverse primers IL3231 (5'-ATTTAAGGAACCTGACGTGACTGC-3') and IL755 (5'-TAAGATGCCGACTATTAATGACACC-3'), respectively, as described by Skilton et al. [22]. Briefly, 2× Taq kappa with dye was used in a final volume of 10 µL, containing 5.0 µL of 2× Taq kappa with dye, 0.25 µL of each of the forward and reverse primers, 2.5 µL of PCR-grade nuclease-free water and 2.0 µL of the DNA template.
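The recipe above is given per single 10 µL reaction; when screening a batch of samples, the non-template components are usually pooled into a master mix. The sketch below shows that arithmetic only as an illustration: the per-reaction volumes are taken from the text, but the 10% pipetting overage and the function itself are assumptions added here, not part of the study's protocol.

```python
# Sketch: scale the 10 uL p104 primary PCR recipe to a master mix for n samples.
# Per-reaction volumes come from the protocol text; the 10% overage is an
# illustrative assumption to cover pipetting losses, not a value from the study.

PER_REACTION_UL = {
    "2x_taq_with_dye": 5.0,
    "forward_primer": 0.25,
    "reverse_primer": 0.25,
    "nuclease_free_water": 2.5,
}
TEMPLATE_UL = 2.0  # template DNA is added to each tube individually, not to the mix


def master_mix(n_samples: int, overage: float = 0.10) -> dict:
    """Return master-mix volumes (uL) for n_samples reactions plus overage."""
    factor = n_samples * (1.0 + overage)
    return {reagent: round(vol * factor, 2) for reagent, vol in PER_REACTION_UL.items()}


if __name__ == "__main__":
    # Example: positive and negative controls plus 10 field samples = 12 reactions
    for reagent, vol in master_mix(12).items():
        print(f"{reagent}: {vol} uL")
    print(f"then add {TEMPLATE_UL} uL of template DNA to each tube")
```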
The PCR conditions for amplifying the p104 gene in the primary PCR were as follows: initial denaturation at 94 °C for 5 min, followed by 40 cycles of denaturation at 94 °C for 1 min, primer annealing at 58 °C for 1 min and extension at 72 °C for 1 min, with a final extension at 72 °C for 9 min and a hold at 4 °C. The secondary PCR amplified a 277 bp internal fragment located between bases 2784 and 3061 of the p104 gene using forward and reverse primers 5'-GGCCAAGGTCTCCTTCAGATTACG-3' and 5'-GTGGGTGTGTTTCCTCGTCATCTGC-3', respectively, as described by Odongo et al. [19]. The primary PCR product (1.0 µL) was used as the DNA template in a 10 µL reaction volume for the secondary PCR as above. The PCR conditions for amplifying the p104 gene in the secondary PCR were as described above except for an annealing temperature of 60 °C for 1 min. The secondary PCR products (5 µL) were checked on 2% agarose gel in Tris-acetate-EDTA (TAE) buffer at 125 V and 300 A for 40 min. Band size was determined using a 50 bp DNA ladder (N32361, Biolabs). The DNA bands were visualised under UV light, photographed and documented (Figure 2). For genotyping T. parva, positive p104 PCR samples and the Muguga cocktail vaccine DNA were used. The 3 micro- (ms2, ms5, ms7) and 11 minisatellite markers (MS3, MS5, MS7, MS16, MS19, MS2, MS25, MS27, MS33, MS34, MS40) [15] were used. Positive (T. parva Muguga stock DNA) and negative (distilled water) controls were run together with test samples. The MS7 and MS19 satellite markers were used to assess whether the T. parva carrier status was due to the vaccine or local strains [10,15,23]. The reaction mixture volumes used in the primary PCR were as described above for detection of p104. The PCR conditions were set as follows: initial denaturation at 94 °C for 5 min, followed by 40 cycles of denaturation at 94 °C for 1 min, annealing at 55 °C for 1 min and extension at 65 °C for 1 min, with a final extension at 65 °C for 9 min and a hold at 4 °C. The reaction mixture volumes used in the secondary PCR were as described above in the nested p104 amplification. The secondary PCR conditions were set as described above in the primary genotyping except for an annealing temperature of 58 °C for 1 min. The secondary PCR products (5 µL) were analysed as described above for p104 detection (Figure 3).
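For readers who want the four thermocycling profiles described above in one place, the sketch below simply restates them as a small data structure (temperatures in °C, times in minutes, 40 cycles in every run); the dictionary layout is illustrative and is not a validated thermocycler program file.

```python
# Summary of the nested PCR cycling conditions described in the text.
# Each tuple is (temperature_C, minutes); every run uses 40 cycles.

CYCLING = {
    "p104_primary":         {"initial_denat": (94, 5), "denat": (94, 1), "anneal": (58, 1),
                             "extend": (72, 1), "final_extend": (72, 9), "hold_C": 4},
    "p104_secondary":       {"initial_denat": (94, 5), "denat": (94, 1), "anneal": (60, 1),
                             "extend": (72, 1), "final_extend": (72, 9), "hold_C": 4},
    "genotyping_primary":   {"initial_denat": (94, 5), "denat": (94, 1), "anneal": (55, 1),
                             "extend": (65, 1), "final_extend": (65, 9), "hold_C": 4},
    "genotyping_secondary": {"initial_denat": (94, 5), "denat": (94, 1), "anneal": (58, 1),
                             "extend": (65, 1), "final_extend": (65, 9), "hold_C": 4},
}
CYCLES = 40

for run, profile in CYCLING.items():
    temp, minutes = profile["anneal"]
    print(f"{run}: {CYCLES} cycles, annealing at {temp} C for {minutes} min")
```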
Data Analysis and Interpretation
Data generated from screening for T. parva DNA using the p104 nPCR and from genotyping using satellite markers (SSRs) were entered and cleaned using Microsoft Excel. Descriptive statistics were computed with 95% confidence intervals. Screening data were analysed using a Chi-square test to determine the association between the outcome variable (T. parva positivity) and the categorical variable (vaccination status), with statistical significance set at p < 0.05. Genotyping data were analysed using GenAlEx software version 5 [24], which was used to calculate genetic diversity parameters, namely the mean number of alleles (Na), number of effective alleles (Ne) and expected heterozygosity (He). These parameters were used to determine parasite diversity, both overall and within the parasite populations from vaccinated and non-vaccinated cattle. Principal Component Analysis (PCA) was used to determine the genetic relationships among T. parva isolates from the Muguga cocktail vaccine, vaccinated and non-vaccinated cattle. Analysis of molecular variance (AMOVA) was also used to determine T. parva diversity by estimating the percentage variation within each population and between the two populations (vaccinated and non-vaccinated).
The detection of T. parva DNA in the blood of vaccinated or non-vaccinated cattle was referred to as 'carrier' status, since clinical ECF was not observed in the study animals. The appearance of a parasite strain/genotype in vaccinated cattle that was not similar to the vaccine strains was considered a 'breakthrough'.
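The screening analysis described above can be reproduced with a few lines of R; the sketch below is illustrative only, since the original analysis was run in Excel, and the 2×2 counts are reconstructed from the prevalences reported later in the Results (60/98 vaccinated and 20/73 non-vaccinated cattle positive) rather than taken from the raw data.

```r
## Illustrative sketch: Chi-square test of association between vaccination status
## and T. parva positivity. The counts below are reconstructed from the reported
## percentages and are assumptions, not the study's raw data table.
counts <- matrix(c(60, 98 - 60,    # vaccinated: positive, negative
                   20, 73 - 20),   # non-vaccinated: positive, negative
                 nrow = 2, byrow = TRUE,
                 dimnames = list(c("vaccinated", "non-vaccinated"),
                                 c("positive", "negative")))
chisq.test(counts)   # a p-value below 0.05 indicates an association
```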
Farm and Farmer Characteristics
The cattle sampled (n = 171) came from fifteen farms in Iganga district in Eastern Uganda and were kept under semi-intensive or zero-grazing farming systems. The majority of the cattle sampled were adults (160, 93.6%), Friesian crosses (154, 90.1%) and females (141, 82.5%) (Table 1).

Determination of the Source of Theileria parva Genotypes in ECF-Vaccinated Cattle
In order to determine whether the T. parva carrier status in recently vaccinated cattle was due to the vaccine or to local strains, 80 p104 nPCR-positive isolates were genotyped; however, only 32.5% (26/80) of the T. parva field isolates gave amplicons with both MS7 and MS19. Using the MS7 minisatellite marker, 58.8% (10/17) of T. parva DNA samples from vaccinated cattle amplified alleles of the same size as that amplified by the Muguga vaccine DNA (300 bp). Among non-vaccinated cattle, 44.4% (4/9) of the extracted DNA samples amplified alleles of the same size as that amplified by the Muguga cocktail vaccine (300 bp), while 55.6% (5/9) showed an allele of 150 bp, which also appeared in 29.4% (5/17) of DNA samples from vaccinated cattle. This 150 bp allele was predominant in local strains and absent from the cocktail vaccine. Some isolates from vaccinated cattle (11.8%, 2/17) amplified an allele of 280 bp, which was not amplified in the majority of isolates from non-vaccinated cattle and was not detected in the Muguga cocktail vaccine strains. The 280 bp allele, when detected in vaccinated cattle, was assumed to be a breakthrough strain. Two alleles (200 bp and 380 bp) were amplified from the cocktail vaccine DNA but were amplified neither from isolates from vaccinated nor from non-vaccinated cattle. Many isolates from vaccinated cattle (47.1%, 8/17) had more than one allele, indicative of multiple infections by several parasite strains; 33.3% (3/9) of isolates from non-vaccinated cattle also had two alleles each (Figure 3).

Determination of the Genetic Relationship between Theileria parva Isolates from Vaccinated and Non-Vaccinated Cattle
In the principal component analysis (PCA) plot, there was no distinct clustering, as the parasite alleles scattered throughout the plot. However, most of the parasite alleles from vaccinated and non-vaccinated cattle appeared in the upper and lower right quadrants. The lower left quadrant contained alleles from vaccinated cattle and the Muguga cocktail vaccine (Figure 5).

Determination of the Genetic Variation of Theileria parva Parasites from Vaccinated and Non-Vaccinated Cattle Populations
Analysis of molecular variance (AMOVA) calculated from the two parasite populations revealed that a large percentage of the genetic variation (96%) lay within individual populations, with only 4% explained by differences among populations (Table 4).
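The diversity parameters compared in the Discussion (Na, Ne and He) were obtained from GenAlEx; for readers who wish to check such values, a minimal sketch is given below, assuming the standard definitions (Ne = 1/Σp², He = 1 − Σp², averaged over loci). The allele frequencies shown are placeholders, not the study data.

```r
## Hedged sketch: per-locus diversity statistics from allele frequency vectors,
## using the standard definitions assumed above. The frequencies are hypothetical.
locus_freqs <- list(MS7  = c(0.55, 0.30, 0.15),
                    MS19 = c(0.40, 0.35, 0.25))

diversity <- function(p) {
  p <- p / sum(p)                 # ensure frequencies sum to 1
  c(Na = length(p),               # number of different alleles
    Ne = 1 / sum(p^2),            # number of effective alleles
    He = 1 - sum(p^2))            # expected heterozygosity
}

sapply(locus_freqs, diversity)    # per-locus values; averaging across loci gives population means
```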
Discussion
This study investigated the carrier status and genetic diversity of T. parva in recently vaccinated cattle up to 120 days post vaccination. There was an overall T. parva prevalence of 46.8% in cattle sampled from two sub-counties of Iganga district. The prevalence was higher in recently Muguga cocktail-vaccinated cattle (61.2%) than in non-vaccinated cattle (27.4%). Previous studies conducted in Tanzania reported similar results, with a higher prevalence of T. parva in vaccinated than in non-vaccinated cattle [25,26]. More than half of the recently vaccinated cattle became carriers of T. parva, supporting the rationale of ITM, since the majority of vaccinated cattle become carriers and remain immune to ECF [27]. The persistence of the carrier state for up to 120 days post vaccination was probably a combined effect of vaccination and continuous tick challenge under natural field conditions; this is consistent with tick exposure playing a major role as an incremental booster of ITM-induced immunity [25]. The T. parva carrier status was not detected in all recently vaccinated cattle. Failure to detect the carrier state in all vaccinated cattle does not, however, necessarily mean vaccine failure or lack of immunity, because some cattle may clear the parasite after vaccination yet remain immune in the absence of a carrier state, a condition known as sterile immunity [10,28]. It is also possible that the parasite was present in tissues other than blood, such as the lymph nodes [28]. Vaccination failure can also occur if a high dose of long-acting oxytetracycline administered during ITM vaccination clears the infection entirely; alternatively, the carrier status may wane when not boosted by infected ticks under natural tick challenge. None of the vaccinated cattle showed clinical signs of ECF, implying that the vaccine protected them. In the case of the non-vaccinated cattle, natural infection may have induced a carrier status that elicits innate or natural immunity; this carrier status is boosted and maintained by infected ticks feeding on the animals, thus providing natural immune protection. In addition, improved farm management could have contributed to the absence of clinical symptoms, since farmers adhered to appropriate acaricide use to control ticks during the study period, which may have kept parasitaemia low. However, ECF is known to be a prevalent disease in the study area. Screening of non-vaccinated cattle using the nested p104 PCR revealed a T. parva prevalence of 27.4% (20/73), much lower than in vaccinated cattle. The non-vaccinated cattle could be carriers of local T. parva strains circulating in the field, or carriers of vaccine strains, since the local tick population can become infected with vaccine parasite strains from vaccinated animals and transmit them to non-vaccinated cattle sharing the same grazing grounds [10]. There was, however, a significant difference in the proportion of carriers between vaccinated and non-vaccinated cattle, with a higher prevalence of T. parva in vaccinated cattle, which confirms the impact of live vaccination in inducing a carrier state [26]. In order to answer the question of which strains of T.
parva persisted in the cattle population after immunisation with the Muguga cocktail vaccine, polymorphisms in the minisatellite markers MS7 and MS19 were studied. The MS7 and MS19 SSR markers were chosen because of their high sensitivity and their ability to differentiate between T. parva genotypes from vaccine and field strains [16]. Some of the alleles that occurred in the Muguga cocktail vaccine DNA were detected in both vaccinated and non-vaccinated cattle. The appearance of vaccine alleles in non-vaccinated cattle implies either that such alleles were transmitted by ticks from vaccinated to non-vaccinated cattle, or that some of the local T. parva strains circulating in the cattle population are similar to the vaccine strains. Some alleles with band sizes of 200 bp and 380 bp occurred in the Muguga vaccine with the MS7 primers but were not detected in isolates from vaccinated cattle. These alleles, present in the vaccine but absent from vaccinated cattle, could derive from the T. parva Serengeti-transformed and Muguga stocks, since the Muguga vaccine is a cocktail of three main stocks: Muguga, Kiambu 5 and Serengeti-transformed [15]. The Muguga and Serengeti-transformed stocks induce a short-term carrier state of up to 87 days post vaccination, whereas Kiambu 5 induces a long-term carrier state [10,13]. The predominant alleles occurring in the majority of the non-vaccinated and in some of the vaccinated cattle isolates, and differing from the Muguga vaccine strains, could be local strains of T. parva circulating in the field. A study by Oura et al. [10] also found that vaccinated cattle can become infected with local T. parva strains in the field without developing overt disease. However, there were alleles occurring in some isolates from vaccinated cattle that were observed neither in the vaccine nor in isolates from non-vaccinated cattle, and these could be possible 'breakthrough' strains [13]. Genotyping of T. parva isolates gave 3-5 alleles from vaccinated and 2-4 alleles from non-vaccinated cattle. The presence of numerous parasite allelic components reflects great antigenic heterogeneity and hence broadens the protection induced by vaccination. Individual parasite DNA isolates, however, carried only one or two alleles each. This reflects the phenomenon that only a limited subset of T. parva genotypes can be transmitted by ticks to cattle, even though the T. parva genotypes circulating in the field are highly diverse [29]. The mean number of different alleles, mean number of effective alleles and expected heterozygosity were used to compare T. parva diversity among parasite populations. All these parameters were higher in vaccinated cattle (mean number of different alleles, 2.214; mean number of effective alleles, 1.978; expected heterozygosity, 0.465) than in non-vaccinated cattle (mean number of different alleles, 1.071; mean number of effective alleles, 1.048; expected heterozygosity, 0.259). This implies that T. parva parasite diversity in recently vaccinated cattle was higher than in non-vaccinated cattle, and confirms the positive role of ITM vaccination in increasing the diversity of T. parva parasites, which may be enhanced through sexual recombination and continuous tick challenge and may lead to wider immune protection [30]. The principal component analysis (PCA) depicted close genetic relatedness of T.
parva parasite isolates from recently vaccinated and non-vaccinated cattle, since they consistently scattered together. This shows that strains were shared between vaccinated and non-vaccinated cattle, a plausible scenario in the field, where ticks transfer T. parva parasites between cattle that share common grazing ground [10,26]. However, isolates from vaccinated cattle were more closely related to the strains from the Muguga cocktail vaccine, as shown by the clustering in the lower left quadrant (Figure 5). The low genetic variation of 4% among the two populations also confirms that the two populations shared a similar genetic composition. The parasite isolates scattered across all four quadrants, implying that the T. parva parasites in the field were genetically diverse, possibly as a result of genetic recombination between field and vaccine parasites [30]. The principal component analysis findings were consistent with the high level of variation existing within individual parasite populations (96%), implying high genetic diversity within each population, similar to the findings of Magulu et al. [26]. During genotyping of field isolates with the minisatellite markers MS7 and MS19 to determine whether the T. parva carrier status was due to vaccine or field strains, Muguga cocktail vaccine DNA (Serial no. ECF MCL 0202 CTT BD DEC 14) was used as the reference control. However, it was difficult to determine exactly which strain of T. parva was causing the carrier status among the 58.8% (MS7) and 64.7% (MS19) of vaccinated cattle isolates that were carriers of a vaccine strain, because the Muguga cocktail vaccine is composed of three T. parva stocks: the Muguga, Kiambu 5 and Serengeti-transformed stocks. Studies have shown that only Kiambu 5 stock alleles can be detected by PCR in isolates from vaccinated cattle after 87 days; therefore, its alleles were the only ones expected. For clarity, it would have been preferable to use the individual vaccine cell line stocks as reference controls; however, this was impeded by the limited funds available to purchase the individual vaccine cell lines, whereas the Muguga cocktail vaccine was easy to access since the project was involved in the cattle vaccination exercise in Iganga district. In addition, there is a need to isolate the antigenic region from the non-vaccine T. parva strains detected in vaccinated and non-vaccinated cattle and to sequence it, in order to determine their genetic make-up and compare them with known T. parva strains to ascertain the possibility of 'breakthrough' strains. Additionally, other tests such as serology (antibody tests) should be performed to ascertain whether the strains causing carrier status among vaccinated cattle could derive from previous infection; carriers may already have existed if cattle had previously been vaccinated, particularly on farms where vaccination records were incomplete.

Conclusions
The study revealed that the prevalence of T. parva was high in both vaccinated (61.2%) and non-vaccinated (27.4%) cattle at 120 days post vaccination. This was brought about by the carrier status of T. parva following infection and treatment method (ITM) vaccination using the Muguga cocktail vaccine. The T. parva carrier status provided immunity to a high number of vaccinated and non-vaccinated cattle, since no clinical symptoms of disease were observed. The T.
parva Muguga cocktail vaccine strains were responsible for the carrier status in the majority of the recently vaccinated cattle (58.8% with MS7 and 64.7% with MS19). The non-vaccine strains causing carrier status in vaccinated cattle could be field strains or breakthrough strains. In addition, there were peculiar strains that appeared in 11.8% (MS7) and 23.5% (MS19) of vaccinated cattle, which are likely to be breakthrough strains. The high genetic diversity of T. parva was attributed to continuous natural tick exposure in the field, leading to parasite strain recombination that generates more genotypes and hence the high parasite allelic diversity observed in vaccinated and non-vaccinated cattle sharing common grazing ground.

Informed Consent Statement: Written informed consent was obtained from the cattle farmers prior to commencement of the study.

Data Availability Statement: The datasets analysed during the current study are available from the corresponding author on request.
BPEC: An R Package for Bayesian Phylogeographic and Ecological Clustering
(arXiv:1604.01617v4 [stat.AP], 18 Sep 2018)

BPEC is an R package for Bayesian Phylogeographic and Ecological Clustering which allows geographical, environmental and phenotypic measurements to be combined with DNA sequences in order to reveal clustered structure resulting from migration events. DNA sequences are modelled using a collapsed version of a simplified coalescent model projected onto haplotype trees, which subsequently give rise to constrained clusterings as migrations occur. Within each cluster, a multivariate Gaussian distribution of the covariates (geographical, environmental, phenotypic) is used. Inference follows tailored reversible jump Markov chain Monte Carlo sampling, so that the number of clusters (i.e., migrations) does not need to be pre-specified. A number of output plots and visualizations are provided which reflect the posterior distribution of the parameters of interest. BPEC also includes functions that create output files which can be loaded into Google Earth. The package commands are illustrated through an example dataset of the polytypic Near Eastern brown frog Rana macrocnemis analysed using BPEC.

Introduction
Phylogeography can be considered the nexus between classical population genetics, phylogenetics and historical biogeography, with much conceptual and analytical overlap with all three, but particularly with population genetics. Phylogeography was born from the integration of population genetics and phylogenetics to work at the micro-macroevolutionary interface (Hickerson et al. 2010), being an evolved discipline that seeks to integrate the genealogical relationships among DNA lineages (sequences) with their geographic distributions to infer the historical events that have shaped the contemporary distributions of species and their genetic variation. However, while population genetics, phylogenetics and historical biogeography have witnessed a growth of analytical approaches in recent years, there has been a relative dearth of analytical approaches within the field of phylogeography, with several reviews summarising these (e.g., Knowles 2009; Bloomquist et al. 2010; Hickerson et al. 2010). To place our work into a broader context, we provide a brief summary of the state of the art within the field of phylogeography, but the aforementioned reviews should be referred to for more detail.

Historical biogeography seeks to understand the processes that have shaped the evolution of geographic differences among related species (i.e., an interspecific process), and may involve timescales that extend back tens of millions of years or more. In contrast, phylogeography concerns both the quantification of the geographic structuring of genetic variation within species and an understanding of the process that has shaped that structure (i.e., an intraspecific process). Thus phylogeographic analyses typically involve timescales that do not extend back more than a few million years. A challenge for phylogeographic analysis is to simultaneously account for evolutionary processes over spatial and temporal dimensions, and perhaps for this reason the phylogeographer's toolkit is a mixed bag of approaches encompassing various objectives within this framework. Some population genetic methods find relevance in phylogeography precisely because they do not use geographical information explicitly, but rely on population genetics modelling to infer the geography of structure. For example, STRUCTURE (Pritchard et al.
2000) infers population structure purely from genotype data through a Latent Dirichlet Allocation model. Population subdivisions are assessed on the basis of multi-locus allele frequencies which are directly learnt from data. More recently, Jombart et al. (2010) developed DAPC, a principal-components alternative to STRUCTURE which can computationally efficiently deal with large amounts of data. In these approaches one describes genetic groupings in the absence of spatial information, onto which phylogeographic inferences can then be conditioned. Fully model-based extensions of spatially-explicit inferences of population structure such as GENELAND (Guillot et al. 2005) and Cheng et al. (2013) assume that the spatial domain occupied by the inferred clusters can be approximated by a small number of polygons based on Voronoi tessellations. Drawing inferences about these cluster domains (and thus about cluster membership) amounts to inferring the location and cluster memberships of the polygons. Finally, recent approaches such as Jay et al. (2015) introduced spatially-dependent cluster membership probabilities through a regression model. These approaches use multilocus genotype data for the inference of spatial genetic structure, and therefore the absence of a coalescent framework limits inferences across the temporal dimension. Methods that use the evolutionary relationships among alleles for phylogeographic analysis open the door for jointly investigating the spatial and temporal dimensions of genetic relatedness among individuals. Early phylogeography relied upon qualitative assessments of the geographic relationships within a gene genealogy, together with estimated dates of gene tree branching events. In this approach demography was directly inferred from the phylogenetic relationships of alleles, with limited importance given to the potentially confounding effects of coalescent stochasticity (Hickerson et al. 2010). Such stochasticity could give rise to similarly probable alternative demographic explanations for a given data set. To address this, simulation-based statistical methods based on coalescent models for parameter estimation have emerged giving rise to statistical phylogeography (Knowles and Maddison 2002;Knowles 2009) allowing for testing among competing demographic models. With regard to the joint analysis of the genealogical and spatial relationships of DNA sequences, we are only aware of two implementations to date. (Lemey et al. 2009) developed a fully model-based Bayesian phylogeographic inference framework, assuming a diffusion model for the geographical migration of nodes on a phylogenetic tree, so that evolution and migration events occur in a continuous-time framework. More recently, Guindon et al. (2016) modelled spatial distribution as a gradual dispersal across a continuous landscape. Here we present a novel R (R Core Team 2017) package available from the Comprehensive R Archive Network at http://CRAN.R-project.org/package=BPEC which automates Bayesian Phylogeographic and Ecological Clustering (BPEC) analysis (Manolopoulou et al. 2011;Manolopoulou and Emerson 2012). BPEC is a model-based approach which assumes that population substructure is the result of individuals migrating into a new area (i.e. dispersal). BPEC differs from the methods of Lemey et al. (2009) in that it explicitly models geographical ranges, assuming that sampling localities are random samples from the entire landscape. In contrast to the continuous approach of Guindon et al. 
(2016), it addresses the phylogeographic structure by inferring geographically structured clusters of DNA sequences as the result of distinct colonisation events, while also admitting a model for the evolutionary history. Here a cluster is defined as a subnetwork of sequences within the haplotype tree that are geographically aggregated and have similar ecological characteristics. BPEC performs full Bayesian inference, which means that it provides an entire posterior distribution over phylogeographic clusterings; although this comes at a computational cost, the ability to provide uncertainty measures is valuable in terms of understanding the impact on scientific hypotheses of interest. The key function of BPEC inputs non-recombinant DNA sequences and geographical locations, as well as any additional covariates available, such as temperature or phenotypic characteristics, in order to identify clusters that are consistent with migration. The results of the analysis provide estimates of the number of migration events, the geographical distribution of the clusters, ancestral locations and the clustered tree structure. Aside from providing estimates for the quantities of interest, BPEC also provides measures of uncertainty of the conclusions and functions for post-processing. Finally, BPEC is supplemented with various visualization tools interfacing with geographical mapping resources to aid interpretation. In Section 2, we present the BPEC model, followed by the corresponding Bayesian computation methods in Section 3. Section 4 describes an example dataset of the eastern lineages of the polytypic Near Eastern brown frog, Rana macrocnemis (Boulenger, 1885), from the Caucasus region (Tarkhnishvili et al. 2001), and the R user interface is presented in Section 5 through the analysis of the example dataset. The output is interpreted in Section 6 and the paper concludes with a short discussion in Section 7.

Model
The aim of BPEC is to combine sequence data S with geographical and (optionally) ecological data Y for demographic inference regarding the geographic and (optionally) ecological structuring of genetic variation, and thus potential geographical or ecological limitations to gene flow. To achieve this aim, BPEC combines an evolutionary model for the genealogical relationships among sampled DNA sequences together with a geographical model representing dispersal events forming clusters into a fully model-based framework.

Haplotype tree model
Approaches to model and estimate the evolutionary relationships among DNA sequences range from the simple and elegant, such as the vanilla coalescent (Kingman 1982), to the complex with intractable likelihood forms (Cornuet et al. 2014). Questions such as the validity of a constant (or effectively constant) population size, independent nucleotide mutations, constant mutation rates across sites, time-dependence and the presence of natural selection pressure all play a role in defining an appropriate evolutionary model and have led to a variety of extensions of the basic model (Wakeley 2013; Hein et al. 2004). In our case, typical datasets are expected to vary from several hundred to no more than several thousand nucleotides, with the low levels of polymorphism that typically characterise intraspecific data sets. As a result, the nucleotide data are noisy and often too weakly informative to allow for very complex models. The evolutionary relationships among a sample of DNA sequences can be represented in one of two ways: a coalescent tree, or a haplotype tree or network.
A coalescent tree is plotted against time and thus explicitly characterizes the most recent common ancestor. An example of a coalescent tree with mutations mapped on is shown in Figure 1, where tips represent observed sequences, black circles indicate mutations, and the timing of the most recent common ancestors (MRCAs) among sequences is represented by branch lengths; as time progresses, both mutation and coalescence events occur. In contrast, haplotype trees are plotted against the number of mutations and summarise mutation differences among sampled sequences, so they only implicitly carry information about time. The haplotype tree corresponding to the coalescent tree of Figure 1 is shown in Figure 2. Given a haplotype tree, an evolutionary model is subsequently needed to allow us to draw inferences about the root haplotype within that tree. However, probabilities of evolutionary histories over rooted haplotype trees are not readily available; models such as the coalescent with mutations are, instead, available (Ethier and Griffiths 1987). A subtle complication derives from potential tree unidentifiability due to repeated observations of the same haplotype, which are to be expected when either or both the mutation rate and the number of sampled nucleotides are insufficient to ascribe unique variation to all sampled haplotypes. As an example, observations 4 and 6 in Figure 1 correspond to the same haplotype, meaning that the two observations could be switched without having any effect on the likelihood of the tree. Observations may, however, be distinct with respect to the geographical or ecological information associated with each one. Aside from identifiability issues, exploring the space of equivalent trees requires cycling through a complex combinatorial object which quickly becomes computationally cumbersome. Collapsing sequences into haplotypes allows us to get around this issue, reducing the space of possible trees, as exemplified in Figure 2 where sequences 4 and 6 from the coalescent tree (Figure 1) are collapsed into a single haplotype. In order to draw inferences about the haplotype tree, approaches can be fully model-based (Felsenstein 1983; Huelsenbeck and Ronquist 2001; Drummond et al. 2012), parsimony-based through the underlying tree (Rzhetsky and Nei 1993; Desper and Gascuel 2002), or purely phenetic such as neighbour-joining or median-joining (Atteson 1999; Gascuel and Steel 2006). BPEC combines parsimonious approaches within a model-based framework. Although an infinite set of haplotype or coalescent trees could be consistent with the sequence data S, BPEC uses relaxed parsimony to reduce it to a finite set of 'plausible' trees Ω represented via a graph (Manolopoulou and Emerson 2012). The relaxed parsimony is defined by a threshold d_s representing parsimony relaxation.
Briefly, haplotypes are connected by an edge if they are a single mutation apart. When two groups of haplotypes are disconnected (with minimum mutation distance d_min), then any connection path with length up to d_min + d_s is considered. The exact details of how to obtain Ω from S for a given d_s can be found in algorithm A of Manolopoulou and Emerson (2012). This algorithm constructs a set of 'realistic' trees by cumulatively adding intermediate sequences following a relaxed parsimony assumption defined by the user-specified parsimony relaxation parameter d_s. In general, larger values of d_s (up to a maximum value) yield more inclusive (and hence more realistic) sets Ω, but the choice of d_s is often limited by computational power. For a fixed d_s, this algorithm inputs the DNA sequences at hand and outputs a sequence network, including loops. The true haplotype tree is then assumed to be one of the minimum spanning trees of this graph with equal probability and can be obtained through the breaking of loops. A haplotype tree encodes less information than a coalescent tree with mutations. Firstly, a haplotype tree only encodes time through the number of mutations. Secondly, it does not automatically define an ordering of events, starting from a root down to the tips. Even a rooted haplotype tree (i.e., one where the ancestral haplotype is specified) imposes only a partial ordering on the set of past mutation and coalescence events. Calculating probabilities over rooted haplotype trees therefore requires summing over all possibilities and orderings of past events given a temporal model; an example of possible orderings is shown in Appendix 8.1. We denote a temporal ordering of events as O, where O^S_{r,T} denotes the set of all temporal orderings consistent with data S given a root r and tree T, and we assume that any temporal ordering of events is equally likely a priori. Conditionally on the observed data (which restricts the possible trees to the space Ω), this prior corresponds to a discrete uniform distribution over Ω and provides the following posterior probabilities for the root r and tree T:

P(r, T | S) = |O^S_{r,T}| / Σ_{r', T' ∈ Ω} |O^S_{r',T'}|,

where |·| denotes the size of the set. Similarly, the marginal posterior probability of a root r is obtained by summing over trees, P(r | S) = Σ_{T ∈ Ω} P(r, T | S). This model naturally takes into account the total number of combinations of mutational and coalescence events. Note that this model disregards the relative probability of coalescence versus mutation, essentially assuming that at every time point either is equally likely. The model can be extended to introduce a mutation rate θ (at the expense of computational complexity) which is simultaneously learnt and used to refine the posterior probabilities of each tree. Although the haplotype tree model described provides a way of assigning posterior probabilities of haplotypes being ancestral, these need to be associated with sampling locations in order to infer the most ancestral location. BPEC assigns probabilities to each location based on the haplotypes observed in it. For each posterior sample, if the inferred root haplotype is observed, then each observed sequence that corresponds to that haplotype contributes equally to a location being ancestral. In other words, each location will be inferred to be ancestral with probability equal to the proportion of root haplotypes that were sampled in it. If the inferred root haplotype is not observed (i.e.
extinct or unsampled), then the oldest observed haplotypes derived from the inferred root are considered equally likely to be the 'most ancestral', and thus each observation of one of these haplotypes contributes equally to the probability of each sampling location being ancestral. An example of this is shown in Figure 3. An important feature of this approach is that the probability of each location being ancestral depends on the proportional representation of each haplotype. This is to circumvent issues of wide sampling variability across locations.

Figure 3: In the left-hand panel, the root haplotype (shown in grey) is observed and thus any location will be inferred to be ancestral according to the proportion of observations of the inferred root haplotype 4. In the right-hand panel, the inferred root haplotype is not observed and the two equally divergent descendant haplotypes (shown in grey) are then used to infer ancestral locations as a function of the proportion of copies of either haplotype in a given location.

Clustering model
The two main requirements to infer migration events for a given tree are: (i) a model for constructing constrained clusterings conditionally on a haplotype tree, and (ii) a model for the distribution of data within each cluster. A key assumption in our model is that new clusters are formed through the migration/dispersal/colonisation of a single individual (haplotype) founding a new geographically distinct cluster (De Iorio and Griffiths 2004a,b). All subsequent descendants of this founding haplotype belong to the new cluster, unless they migrate again. Given an inferred tree representing the genealogy, possible clusterings of the data are thus constrained by the tree while at the same time informed by the geographic distribution (and optionally ecological data) of the observations for each individual. Figure 4 provides an illustration using the hypothetical coalescent tree of Figure 1. The coalescent tree determines a set of constrained clusterings which are feasible through migration events. For example, observations 2, 3 and 6 in Figure 4 could have formed a single cluster together, but 6 and 7 could not. The corresponding constrained clusterings defined on the collapsed haplotype tree are slightly less intuitive, as repeated observations of the same haplotype (node) can belong to different clusters. In the collapsed haplotype network shown in Figure 5, all clustered nodes must be directly connected within their cluster.

Figure 5: Constrained clustering of Figure 4, where colour corresponds to cluster and size of node to the number of individuals sampled with each sequence. Edges represent single effective mutations and black dots represent unobserved intermediate haplotypes.

For simplicity, we shall refer to haplotype 4/6 as haplotype 4 from now on. Formally, conditionally on a haplotype tree, the clustering model is defined as follows. We denote the set of distinct haplotypes in the sequence set S (of size N) as H = {H_1, ..., H_n} with size n, and use |H_i| to denote the number of copies of haplotype H_i observed in the data. Let K denote the number of migrations, which is itself allowed to vary. Each migration event is associated with a haplotype which migrated, denoted as m = {m_1, ..., m_K}. Although colonisation events happen in order, here we do not model the events temporally, so the order of m is irrelevant. Note that the haplotypes in this list need not be distinct, as two different
copies of the same haplotype may have colonised, or a single sequence may have colonised twice. The set of colonies/clusters with which each migrating haplotype is associated is denoted by C(m_k), k = 1, ..., K; in the example above, K = 3, m = {4, 4, 4} and C(4) = {blue, yellow, pink, green}, since all migrations were of the same haplotype. This means that, in general, the sample space of m has size n^K / K!. Conditionally on a set of migrating haplotypes, the space of constrained clusterings is then such that all observations of that haplotype must belong to one of the corresponding clusters C(m_k) (i.e., either the original cluster or the one which was the result of migration). Equivalently, all adjacent haplotypes must also belong to one of these clusters (unless one of them also migrated, and so on). Once the clustering has been established, the geographical and ecological observations Y_i, i = 1, ..., N, are Normally distributed within their clusters,

Y_i | c_i = c ~ N(µ_c, Σ_c),

where c_i denotes the cluster of observation i. To complete the model, prior distributions are defined on the model parameters. The number of migrations is assumed to be uniform between 0 and K_max (corresponding to between 1 and K_max + 1 clusters). Other prior distributions (e.g., Poisson) could be used instead, but we do not explore this direction here. The |m_k| observations of each of the migrating haplotypes m_k are each assigned uniformly to one of the clusters in C(m_k), and similarly for the deg(m_k) clades connected to it (where the degree represents the number of edges connected to node m_k), so the prior probability of each clustering conditional on the migrating haplotypes (and their clusters) is simply a combinatorial coefficient. The means and variances of each cluster are assigned different priors for the longitude-latitude coordinates versus the remaining covariates: the longitude-latitude block of Σ_k receives an Inverse-Wishart prior with parameters γ and ψ, the remaining diagonal entries receive independent Inverse-Gamma priors, and we assume that any off-diagonal entries of Σ_k in dimensions 3:d are 0. By convention, the first two coordinates of Y always represent longitude and latitude, normalised such that the mean of both is zero and the average (between longitude and latitude) variance is 1, using the same normalising factor for both longitude and latitude to reflect the isotropy of the two dimensions. Note that longitude and latitude are treated as Euclidean coordinates, which means that datasets spanning a very large region may result in distorted results. The remaining coordinates correspond to environmental or phenotypic characteristics (if available), which are normalised to sample mean 0 and marginal variance 1. We impose uncorrelated environmental/phenotypic characteristics by forcing the covariance matrices to be 0 on all off-diagonal entries except for those corresponding to longitude-latitude. This is because the concentration parameter γ of an Inverse-Wishart prior needs to be at least d_Σ + 2 in order to be well-defined, where d_Σ is the dimension of the covariance matrix modelled. In our case, if we were to model the entire covariance matrix through an Inverse-Wishart prior, γ would be forced to a minimum of 3 + d, which (for moderate d) corresponds to low prior variance and can be too restrictive. We thus restrict the Inverse-Wishart prior to the geographical covariates only and place independent Inverse-Gamma priors on the remaining diagonal elements of Σ_k.
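For reference, the pieces of the clustering model stated above can be collected into a single hierarchical sketch; the prior on the cluster means and the Inverse-Gamma hyperparameters are not spelled out in this text, so they are left generic here.

```latex
% Hedged summary of the clustering model described above; the prior on the
% cluster means \mu_k and the Inverse-Gamma hyperparameters (a, b) are not
% given in the text and are left generic.
\begin{align*}
  K &\sim \text{Uniform}\{0, 1, \dots, K_{\max}\} \\
  Y_i \mid c_i = k &\sim \text{N}_d(\mu_k, \Sigma_k), \quad i = 1, \dots, N \\
  \Sigma_k[1{:}2, 1{:}2] &\sim \text{Inverse-Wishart}(\gamma, \psi) \\
  \Sigma_k[j, j] &\sim \text{Inverse-Gamma}(a, b), \quad j = 3, \dots, d \\
  \Sigma_k[i, j] &= 0 \quad \text{for } i \neq j \text{ with } \max(i, j) \geq 3
\end{align*}
```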
Perhaps the most important prior distributions here are the ones relating to the shape Σ of each cluster, namely the parameters γ and ψ of the Inverse-Wishart prior, as these define the prior belief about the spread of each cluster. Although the parameter γ is allowed to vary and hence can adapt depending on information from the data, too large or too small values of ψ (corresponding to a prior belief of geographically widely spread versus tiny clusters) will have an impact on the posterior inference. The default setting in BPEC is that clusters are a priori expected to span about 30% of the total range.

Bayesian computation
The entire model consists of the root and tree posterior distribution together with the migration and clustering model. Inferences are drawn simultaneously, such that we can borrow information from the tree for the migration parameters and vice versa. The complexity of this phylogeographic model implies that drawing inferences about the posterior distribution of the parameters is challenging. We proceed via tailored Markov chain Monte Carlo (MCMC) using a combination of adaptive proposals, auxiliary variables and data-driven proposals. This is especially crucial for the clustering, which here is restricted to tree-based clusterings, since the space of clusterings is vast and discrete without natural local moves.

Markov chain Monte Carlo sampler
The Markov chain Monte Carlo sampler alternates between updates of the tree parameters and the clustering parameters. We adopt a scheme whereby updates of parameters are performed at varying frequencies, reflecting the difficulty of accepting or rejecting a move and allowing both local and global exploration of the parameter space. Four different updates are described below, which are then combined into a sampler at varying frequencies. The parameters updated are the tree T, root r, colonised haplotypes m, clustering c, and cluster means µ and variances Σ.
1. Conditionally on a given tree T, propose to change the root along with a mutation history. Accept or reject the proposed root and mutation history.
2a. Conditionally on the root r, propose a new tree T and mutation history uniformly.
2b. Conditionally on the proposed tree T, propose to change one of the colonised haplotypes in m.
2c. Conditionally on the colonised haplotypes, propose to change the set of clusterings c along with the means µ and variances Σ of each cluster.
2d. The proposed tree topology and history, root, clustering, means and variances are accepted or rejected together.
However, steps (2a), (2b) and (2c) need not all occur at the same time. Specifically, steps (2a-b) are only performed (roughly) every 5th iteration. The precise mechanics of the sampler are not shown here; some additional technical issues are discussed in Appendix 8.2.

Technical considerations
Almost as important as how the method works is when it is sound to use (or not). Since the package is intended to be used primarily by practitioners, one of the aims of this paper is to clarify what types of questions BPEC can potentially answer, as well as what underlying assumptions are necessary and implicit. Bayesian Phylogeographic and Ecological Clustering assumes that non-recombinant (typically mtDNA) data are available from a set of geographical locations (in the form of longitude/latitude). The haplotype tree model takes a relaxed parsimony approach which may be unreliable under conditions of mutational saturation or excessive homoplasy.
BPEC is programmed to produce appropriate error messages to inform the user in such cases, but will not be foolproof. The geographical model assumes a constant population size and migration rate, and thus as real data departs from this model the inferences from BPEC are expected to depart from the true demographic history. However, simulation analyses will be required to address this quantitatively. Also, the clustering and migration model does not explicitly take into account geographical distance between clusters. It simply separates observations in distinct geographical clusters. Therefore, it is possible for a migration to result in two distant clusters. Notice that we assume a uniform prior over the number of migrations K. In general, K migrations can lead to up to K + 1 clusters; often, however, some of these may be empty, resulting in fewer 'effective' migrations. The uniform prior applies to the total number of migrations rather than the number of effective ones, whereas the posterior distribution over the number of migrations actually refers to effective migrations. This somewhat convoluted approach is preferred because enumerating scenarios of different effective migrations is computationally cumbersome. As discussed earlier, an important consideration when using BPEC for the inference of ancestral areas is the distribution of haplotype observations within each location. Since ancestral area probabilities are determined through the proportion of inferred ancestral haplotypes, a site with, for example, a single haplotype which happens to be ancestral, will always result in high probability of being ancestral. Consequently, ancestral location probabilities should be more reliable when there are more observations per location. It also frequently occurs that uncertainty about the root haplotype is high, where a range of different haplotypes carry significant posterior mass. As long as no convergence errors are reported, this is not a convergence issue but merely reflects uncertainty in the data. One of the limitations of Markov chain Monte Carlo methods is that the samplers require a large number of iterations to satisfy convergence diagnostics. The convergence diagnostics in BPEC are split into two pieces: convergence of the clustering and convergence of the root haplotype. If either of these two pieces has not converged, the sampler will return an error to that effect. Ideally, both pieces should satisfy the convergence diagnostics; however, it is sometimes the case (especially when dealing with a large number of clusters) that, for any reasonable number of MCMC iterations, the diagnostics fail. In these cases, inferences should be taken with caution. BPEC cannot deal with unknown nucleotides and will ignore any nucleotide sites at which at least one of the sequences has an ambiguity code. This means that ambiguous nucleotides result in information loss. On the other hand, BPEC will treat true alignment gaps '-' as a 5th character such that a deletion/insertion is treated as a type of mutation. Care should be taken in the interpretation of the output when lots of missing nucleotides are present, since this could lead to significant loss of resolution (Joly et al. 2007). Brown frog data The BPEC package will be implemented on a brown frog dataset which will be used throughout the next few sections for illustration. 
We used 40 mitochondrial cytochrome b sequences of Near Eastern brown frogs -Rana macrocnemis (Boulenger, 1885) to demonstrate a combined phylogeographic and ecological analysis with BPEC. Previous molecular analyses (Tarkhnishvili et al. 2001;Veith et al. 2003b,a) have attributed range expansion and fragmentation triggered by Pleistocene glaciation cycles as drivers of demographic change within the brown frog. R. macrocnemis is represented by a number of recognised subspecies across its entire range, and here we focus on two widespread subspecies that are geographically distinct in the southwest Caucasus and separated by a narrow transition zone (Tarkhnishvili et al. 2001). The nominotypic R. macrocnemis macrocnemis (Boulenger, 1885) is found on the forested slopes of the Trialeti ridge northwest and in montane meadows on both sides of the Great Caucasus, while R. macrocnemis camerani (Boulenger, 1885) occurs in southern Georgia on the Javakheti plateau (Tarkhnishvili et al. 2001). A map of the sampling localities, indicating proportion of R. macrocnemis macrocnemis versus R. macrocnemis camerani, is shown in Figure 6. BPEC was applied to investigate geographic and environmental aggregation of haplotypes within R. macrocnemis. We included predictive environmental and climate covariates (topographic and land cover conditions, and annual trend patterns of temperature, precipitation and seasonality) to examine environment and geography as agents for the structuring of genetic variation. Grid-based attribute values of a set of predictor variables associated with each cell position of the map layers were subsequently extracted at the point locations of the georeferenced mtDNA haplotypes from six raster grids by means of the extract function of the raster package (Hijmans 2015): four bioclimatic variables (Annual Mean Temperature (degrees Celsius x 10), Temperature Annual Range (100 × standard deviation of monthly mean temperature), Annual Precipitation (in mm), Precipitation Seasonality (Coefficient of Variation (CV)), from the bioclim database available in the dismo package (Hijmans et al. 2016), altitude in meters as a proxy for a digital elevation model and the land cover map (GLC2000) from the subdomain land cover/land use housed under http://worldgrids.org/ global environmental layers. We re-classified the total information of the land cover map into two classes of forested and non-forested areas to introduce a simplistic landscape dependent habitat variable (COV). These six variables altogether describe climatic, topographic, and land cover conditions that are potentially informative predictors in terms of species distribution. Inputs BPEC takes two main inputs: the set of mtDNA sequences (in NEXUS format) and the set of coordinates and haplotypes observed in each location. Sequences need not be collapsed into unique haplotypes, but labelling of sequences in the NEXUS file and the locations file must be consistent. In order to load these two variables into R from two files called haplotypes.nex and coordsLocsFile.txt (for example), the following commands can be used. For an example of input files, use the files provided through system.file("haplotypes.nex",package = "BPEC") and system.file("coordsLocsFile.txt",package = "BPEC") or see Supplementary materials of the manuscript. The sequences can be loaded using the bpec.loadSeq command. 
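For concreteness, a minimal sketch of the sequence-loading step is given below; it uses the example NEXUS file shipped with the package, as referenced above, and the variable names are ours.

```r
library(BPEC)

# Load the example NEXUS file of haplotype sequences shipped with the package
# (any NEXUS file with consistently labelled sequences can be used instead).
seqFile <- system.file("haplotypes.nex", package = "BPEC")
rawSeqs <- bpec.loadSeq(seqFile)
```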
In order to load the file containing the coordinates, covariates and observed haplotypes/sequences of each location, use the bpec.loadCoords command below. Use the option header = TRUE when the first row includes variable names.

The bpec.mcmc() sampler takes the following main arguments:

maxMig: the maximum number of migrations to be considered. In terms of inference, the higher maxMig, the better the results, since more models are considered; however, this comes at a computational cost. We recommend using a low but intuitive value based on the study system to begin an iterative assessment. For example, if a value of 6 is used (corresponding to 7 clusters) and the inference shows significant posterior probability on 7 clusters, increase maxMig and rerun. Similarly, if, e.g., a value of 5 is used and convergence diagnostics are not satisfied, but posterior mass seems to be minimal around 4/5 migrations, then one can reduce maxMig to 4 (which will reduce complexity) and re-run.

iter: the number of MCMC iterations to run the sampler for. By default, two chains will be run from different starting values. The value of iter is important, as it determines how long the chains will run and whether the convergence diagnostics (both in terms of the root haplotype and the clustering) will be satisfied. A value of 100,000 is usually reasonable to start with; if convergence diagnostics are not satisfied, or if the post-processing plots look inconsistent, increase iter by a factor of 10 (and so on).

ds: the parsimony relaxation parameter d_s. We recommend starting with d_s = 0 and increasing once reasonable values of iter and maxMig have been established. Note that increasing d_s past an (unknown) value d_max, which depends on the individual dataset, has no effect on the inference.

postSamples: the number of posterior samples (per chain) to be saved for posterior summary statistics. We recommend using a value around 1,000. The higher the better for inference, but this comes at a memory storage cost.

dims: the number of covariates (including longitude and latitude) available. If only geographical data are used (and no environmental or phenotypic information), dims = 2. Otherwise increase as appropriate.

In the case of the brown frog dataset, the dimensionality of the data was dims = 8 (the geographical dimensions longitude and latitude plus the six additional environmental covariates). We ran the BPEC analysis taking the maximum parsimony level option at ds = 0, increasing up to ds = 3 (to potentially explore more candidate trees), for 1,000,000 iterations (iter) each. No change to the results was observed, since the brown frog haplotypes formed a fully connected tree without missing intermediate haplotypes. Convergence diagnostics of the maximum a posteriori clusterings and root were not violated (i.e., no convergence error message was reported). The output of the function is shown below.

Outputs
The bpec.mcmc command outputs an R object of class BPEC which can be summarised using generic functions such as print(), summary() and plot(), as well as the accessor functions input(), preproc(), output.tree(), output.clust() and output.mcmc(). The output of each of these accessor functions is shown in Tables 2-6.

Visualizations and post-processing
As described in the previous section, the Markov chain Monte Carlo sampler returns many different types of outputs. In order to obtain a summarised picture of the inference, a number of visualizations are available through BPEC to aid interpretation.
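Before turning to the individual plots, the loading and sampling commands described above can be collected into one end-to-end sketch. The exact positional form of the bpec.mcmc() call and the maxMig value used for the brown frog run are assumptions on our part (only the argument names and the iter, ds, postSamples and dims settings are given above), so treat this as illustrative rather than a record of the actual analysis; variable names are ours.

```r
# Coordinates, covariates and haplotype labels per sampling location;
# header = TRUE because the first row holds variable names.
coordsFile <- system.file("coordsLocsFile.txt", package = "BPEC")
coordsLocs <- bpec.loadCoords(coordsFile, header = TRUE)

# Run the sampler with the settings reported for the brown frog dataset
# (maxMig = 5 is a placeholder; the text recommends starting low and iterating).
bpecout <- bpec.mcmc(rawSeqs, coordsLocs, maxMig = 5, iter = 1000000,
                     ds = 0, postSamples = 1000, dims = 8)

# Posterior probabilities of 0, ..., maxMig migrations and of each
# sampling location being ancestral, as described in the output tables.
output.clust(bpecout)$migProbs
output.tree(bpecout)$rootLocProbs
```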
The output of the function is summarised below.

Outputs

The bpec.mcmc command outputs an R object of class BPEC which can be summarised using generic functions such as print(), summary() and plot(), as well as accessor functions input(), preproc(), output.tree(), output.clust(), output.mcmc(). The output of each of these accessor functions is shown in Tables 2-6.

Table 2: the list of outputs of input(), corresponding to all the inputs and arguments that were provided to bpec.mcmc().
  seqCountOrig: the number of sequences in the data.
  seqLengthOrig: the length of the input sequences.
  iter: the number of MCMC iterations.
  ds: the parsimony relaxation parameter.
  coordsLocs: the input coordinates (and optional additional ecological measurements) and their corresponding sequence indices.
  coordsDims: the dimension of the input measurements (2 if purely longitude and latitude, +1 for every additional one).
  locNo: the number of distinct sampling locations.
  locData: the coordinates and measurements of each sampled sequence.

Sequence pre-processing outputs (cf. preproc()):
  seq: the output DNA sequences of distinct haplotypes, collapsed to effective nucleotide sites (both sampled and missing sequences which were inferred).
  seqsFile: a vector of the numerical labels of each haplotype.
  seqLabels: correspondence vector for each of the processed observations to the original haplotype labels.
  seqIndices: correspondence vector for each of the original observations to the resulting haplotype labels.
  seqLength: the effective length of the input sequences, given by the number of variable nucleotide sites which are informative. In other words, if two or more nucleotide sites describe the same subsets of sequences, then they are collapsed to a single informative nucleotide.
  noSamples: the number of times each haplotype was observed in the sample.
  count: the number of output sequences.

Outputs of output.tree():
  clado: the adjacency matrix for the maximum a posteriori tree in vectorised format. For two haplotypes i, j, the (i,j)th entry of the matrix is 1 if the haplotypes are connected in the network and 0 otherwise.
  levels: starting from the root (level 0) all the way to the tips, the discrete depth for the maximum a posteriori tree.
  edgeTotalProb: posterior probabilities of each edge being present in the tree, so that any edge which is not part of a loop will have posterior probability 1.
  rootProbs: a vector of the posterior probabilities that each haplotype is the root of the tree.
  treeEdges: contains the same information as cladoR, but in a different format. The set of edges (from and to haplotypes) of the maximum a posteriori haplotype tree are represented as an edge list of from/to vectors which could be used in the graph and network modelling R package igraph (Csardi and Nepusz 2006) if needed.
  rootLocProbs: a vector of the posterior probabilities of each sampling location being the most ancestral location. If several rows in the file coordsLocsFile.txt correspond to the same geographical location, the first of these will carry the total posterior probability for the location, with the remaining having 0.
  migProbs: a vector of the posterior probabilities of {0, ..., maxMig} migrations.

MCMC sampler outputs (cf. output.mcmc()):
  MCMCparams: various tuning parameters used in the MCMC sampler; this is only important for development.
  codaInput: posterior samples from the two MCMC chains for the cluster means, cluster covariance entries, as well as the root haplotype. Note that, since the number of clusters varies from iteration to iteration, some samples are simply draws from the prior (corresponding to empty clusters). This variable can be loaded directly into the coda package (Plummer et al. 2006) for convergence analysis.
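That coda step can be sketched as follows, assuming codaInput is returned in (or can be coerced to) coda's mcmc.list format; the diagnostic calls are standard coda functions:

R> library("coda")
R> chains <- output.mcmc(bpecout)$codaInput
R> summary(chains)          # posterior summaries of the sampled parameters
R> gelman.diag(chains)      # Gelman-Rubin diagnostic across the two chains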
Visualizations and post-processing

As described in the previous section, the Markov chain Monte Carlo sampler returns many different types of outputs. In order to obtain a summarised picture of the inference, a number of visualizations are available through BPEC to aid interpretation.

Geographical contour plot

The command bpec.contourPlot provides a colour-coded contour plot of the geographical clusters superimposed onto a map (provided accurate longitude and latitude coordinates have been provided) using

R> par(mar = c(0, 0, 0, 0))
R> bpec.contourPlot(bpecout, GoogleEarth = 0, mapType = 'google',
+    colorCode = c(7, 5, 6, 3, 2), mapCentre = NULL, zoom = 7)

In order to convey not only posterior means but also uncertainty, a set of posterior draws of these contours are plotted using transparency, so that the user can assess the stability of the inference. The sampling locations are also shown on this contour plot, with the top three sampling locations in terms of their probability of being ancestral shown as larger points. The precise posterior probabilities (which may all be low in the presence of uncertainty) of each of the localities being ancestral can be found through output.tree(bpecout)$rootLocProbs. The colours can be changed through the optional argument colorCode (with default value (7, 5, 6, 3, 2, 8, 4, 9)), which controls the colour of the first, second, third cluster, etc.; if not specified, the default colour scheme is used. There are four options for the argument mapType: 'none' will show the posterior distribution of the clusters against a white background, 'plain' will use the in-built outline R maps, 'google' will superimpose the contours on a map downloaded from Google maps (requires an internet connection), and 'osm' will do the same using OpenStreetMap. The optional arguments mapCentre and zoom allow the user to specify the centre of the map and the level of zooming when using the Google maps option.

In the case of the brown frog dataset, the contour plot is shown in Figure 7. The posterior mass for the number of clusters strongly concentrates around 2 (as indicated by the output output.clust(bpecout)$migProbs), with the posterior probability of 2 clusters being greater than 0.99. The yellow cluster can be taxonomically aligned to the subspecies R. m. macrocnemis lineage, while the turquoise cluster includes individuals of R. m. macrocnemis from the humid and forested mountain region, and individuals assigned to R. m. camerani from the drier area of the southern treeless mountain steppe habitats of the Javakheti plateau. The contour ellipses overlap in the heart of the geographic transition zone south of the Minor Caucasus.

Instead of using the R interface, the contour plot can also be exported into Google Earth primary exchange format using the option GoogleEarth = 1. This will produce a set of files with extension kml which can be loaded directly into Google Earth. Finally, a 'messy' looking plot such as the toy example in Figure 8 either implies poor MCMC convergence or high uncertainty in terms of the clustering.

Figure 7: An example of the contour plot for the brown frog dataset using bpec.contourPlot. Each transparent geographical ellipse represents a posterior draw for the geographic centre of the cluster within the 50% level contour of that draw. The 50% contour represents the boundary where the probability density of the cluster is 50% of the maximum density (i.e., the centre of the cluster). Solid ellipses represent posterior means. Larger triangles represent most likely ancestral locations. The black jagged lines show the outline of the geographical map of the area.

Environmental and/or phenotypic covariates plot

In cases where environmental or phenotypic covariates have also been used, posterior draws for the distribution of the covariates within clusters are available through the cluster means output.clust(bpecout)$sampleMeans and covariances output.clust(bpecout)$sampleCovs.
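These draws are plotted directly by the covariate plotting wrapper; a minimal sketch, assuming bpec.covariatesPlot takes the fitted object as its only required argument and reuses the contour plot colour coding:

R> bpec.covariatesPlot(bpecout)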
The posterior draws can be summarized through posterior medians and 5/95% credible regions, colour-coded using the same coding as the contour plot. To aid plotting and interpretation, the covariate names of each of the columns of coordsLocs are used; the first two (corresponding to longitude and latitude) are automatically ignored in this function. The plot produced in the case of the brown frog dataset is shown in Figure 9.

Clustered tree plot

To visualize the maximum a posteriori haplotype tree, the command bpec.treePlot plots the haplotype tree most supported by the data. The size of each node in the tree represents the number of times each haplotype was observed, with black dots corresponding to missing intermediate haplotypes. The thickness of each edge represents the posterior probability that each mutation occurred (thin edges corresponding to mutations with high uncertainty). Observed haplotypes are colour-coded according to their posterior probability of belonging to each cluster. As long as the same colorCode variable is used, the cluster colours correspond to the ones used in the geographical and covariate contour plots.

Tree plot on geographical map

The tree plot can also be partially visualised geographically through the bpec.geoTree command, which superimposes the haplotype tree onto a map through a file that can be loaded into Google Earth. The function uses the igraph package (Csardi and Nepusz 2006) as well as phytools (Revell 2012; Valiente 2010) in order to visualise the network as an interactive tree. Finally, to overlay the tree onto the map, the archived library R2G2 is used (Arrigo et al. 2012; Arrigo 2013). Since haplotypes can be observed in multiple locations, clicking on particular nodes of the tree shows the locations where each copy of the haplotype was found. However, when multiple haplotypes were found in a single location, only one will be displayed, so bpec.geoTree may not tell the whole story. Also note that only existing tip haplotypes are possible to identify on the map.

R> bpec.geo <- bpec.geoTree(bpecout, file = "GoogleEarthTree.kml")

Tip haplotypes are connected to the tree by a single branch, internal node haplotypes have three or more connections, whereas branch haplotypes have exactly two connections.

Analysis of the brown frog data

In the case of the brown frog data, the three locations with the highest probability of being ancestral are approximately located at the intersection between the yellow and turquoise clusters, shown as larger dots in Figure 7. These three locations correspond to (a) Paravani lake, treeless mountain steppe, 2100 m, Javakheti plateau, posterior probability 24%; (b) Tsalka, treeless mountain steppe, close to the southern slopes of the Trialeti Ridge, posterior probability 12%; and (c) Cross Mountain Pass, alpine habitat, 2000-2500 m, Great Caucasus, posterior probability 10%. However, it is important to condition any conclusions drawn from these inferences on their associated probability values, and in the case of R. macrocnemis it is clear that there is high uncertainty associated with these inferences, related to the limited information content of the data rather than to issues of convergence. Further sampling (more localities and more individuals per locality) may improve posterior probabilities, but it may also be possible to develop BPEC to incorporate outgroup sequences for the inference of ancestral haplotypes (see Section 7), something that should improve posterior probabilities for ancestral areas.
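These location probabilities are read directly from the fitted object; a short sketch using the rootLocProbs output described earlier:

R> rootLocProbs <- output.tree(bpecout)$rootLocProbs
R> round(sort(rootLocProbs, decreasing = TRUE)[1:3], 2)   # three most likely ancestral locations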
Differences within the bioclimatic (annual mean temperature, annual range of temperature, annual precipitation, precipitation seasonality) and altitude variables between the two clusters (Figure 9) are largely due to their mean values rather than their variances. Differences in means are especially apparent for the annual mean temperature (> 5℃ for the yellow cluster and around 5℃ for the turquoise cluster), the annual range of temperature (high amplitude of variation, typical for mountain climates: CV < 320% for the yellow cluster, > 320% for populations of the turquoise cluster), annual precipitation (higher for populations of the turquoise cluster, nearly 800 mm), and the annual distribution of precipitation (much higher in the yellow cluster, CV > 45%, which results in higher variation in the timing and intensity of annual precipitation). Altitude is rather similar, around 1500-1800 m. Finally, the landscape-dependent variable 'open vs. forested habitat' is clearly different for the clusters. These findings suggest that individuals within the sampled area for R. macrocnemis are best described by two geographic clusters of mtDNA sequence variation and that they also differ with respect to specific environmental conditions. These data therefore offer support to the hypothesis that both processes of geographic isolation and divergent selection have contributed to diversification within the group, with the suggestion that taxonomy should recognise these entities. As such, the results of BPEC provide specific hypotheses that can be further tested with a more extensive genetic-marker-based approach for hypothesis testing (see Section 7).

Discussion

We have described BPEC, an implementation of the phylogeographic and ecological clustering methods described in Manolopoulou et al. (2011) and Manolopoulou and Emerson (2012). We have introduced several visualization and post-processing tools in order to aid data analysis and interpretation, along with details of the significance of different types of output. BPEC will continue to be improved. The main focus of the extensions will revolve around speeding up the convergence of the sampler and improving the approximation stemming from the auxiliary tree parameter. We recommend caution when extrapolating conclusions from BPEC output, and as is the case for many software packages it is important that users do not take a black box approach. Users should condition their conclusions on the biology of their organism of interest, the completeness of their sampling, and the idiosyncrasies of their data (e.g. the proportion of unsampled haplotypes). In terms of extensions to the actual model, more generic evolutionary models for subdivided haplotype trees will be gradually introduced, such as the one recently developed by De Maio et al. (2015). Similarly, explicitly modelling the migration process as a spatial transition will allow additional information from the spatial distribution to inform the tree and vice versa. As currently configured, BPEC is best treated as a tool that can potentially reduce model space for subsequent hypothesis testing. As an example, in the case of the brown frog Rana macrocnemis, BPEC identified geographic clusters of mtDNA sequence variation that are associated with differing environmental conditions that could underpin divergent selection. Thus, BPEC presents evidence that both neutral and selective processes are driving diversification within the group.
However, as BPEC is limited to the analysis of a single DNA sequence locus, inferences should not be extrapolated to ultimate biological/ecological conclusions. BPEC should lend itself to the integration of inferences across multiple loci within a species, and this is an area that we are investigating for future updates. Of particular relevance is the increasing accessibility of reduced genome sequencing data (McCormack et al. 2013) that can provide up to tens of thousands of loci per individual. Filtering for loci characterised by multiple SNPs could provide a rich data source for a multi-locus BPEC implementation. Analogous to a single-species multi-locus analysis, it should also be possible to integrate across different species sampled from the same locations within BPEC. Such an approach would provide quantitative measures for comparative phylogeography, and this will also be explored for future updates of BPEC. Outgroup sequences can potentially directly inform about the probability of a haplotype being the Most Recent Common Ancestor (MRCA) of a set of sequences, and future versions of BPEC will explore the possibility of incorporating outgroup sequences for this purpose. Sequences immediately derived from an inferred MRCA are also expected to provide some information regarding ancestral areas, and integrating information across the MRCA and the sequences immediately derived from it will also be explored. Finally, we are investigating whether we can extend the applicability of BPEC to the analysis of geographic population structure derived from vicariant processes, i.e., where populations become isolated and thus initially share genetic variation, but diverge through time through lineage sorting effects and the accumulation of new population-specific mutations. BPEC should be applicable to the examination of genealogy among such closely related populations under the evolutionary model of population splitting. In the absence of opposing gene flow among populations, all populations will eventually become diagnosable as descending from a single haplotype unique to that population (lineage sorting). This diagnostic is equivalent to the pattern derived from a colonisation event, and as such it must be borne in mind that clusters defined by BPEC may indeed have a vicariant origin. Incorporating a vicariance model into BPEC may prove challenging, but it would (i) facilitate the detection of more subtle geographic structuring than that derived from the dispersal model, and (ii) provide a more realistic model of phylogeographic structure.

Temporal orderings

Suppose the haplotype tree is given by the top tree of Figure 11 (Manolopoulou and Emerson 2012). For ease of exposition, the numbers on the nodes here represent the sample sizes of each haplotype rather than the label of each haplotype. Nodes without a number correspond to haplotypes which have not appeared yet. At first one sequence is present, the ancestral sequence, which replicated into two (the first event is always a replication, otherwise that haplotype would disappear). Then one of those two identical sequences may replicate again to give us a total of three (or could have mutated to give a new haplotype). One of those three then mutates to give us the intermediate haplotype, which in turn here replicates and then mutates (and goes extinct) to give us the right-hand leaf. Finally, the intermediate haplotype mutates again to give us the left-hand leaf, which then also replicates to give another copy of itself.
Simulating a temporal ordering implies that, starting with the ancestral sequence, we specify a series of replication and mutation events that mimics evolution, eventually resulting in the observed haplotype tree. A possible series of events is shown in Figure 11 through 7 timepoints, where node numbers indicate the number of copies of each haplotype. Notice that, if the root node had replicated further, we would have had three copies of the root haplotype. Although in theory this could have happened, with one of the copies eventually becoming extinct, we do not take into account any such scenarios; instead we only account for the observed sequences. Additionally, it would not have been possible for the intermediate haplotype to mutate after Step 3 above, since then it would disappear from the ancestral sequences, and another mutation would not have been possible.

Computational issues

In the sampler of Manolopoulou et al. (2011), some branches were given higher weight than others. In BPEC we tweak the proposal distribution of Manolopoulou et al. (2011) by introducing an auxiliary variable w_c, representing the weight of the previous clustering in the MCMC sampler. Rather than allocating each observation branch to one of the existing clusters simply by assessing the fit of each branch to each of the clusters, we assign it to the same cluster as the previous iteration (where possible) with probability w_c. This favours clusterings similar to the previous iteration, thereby ensuring that local moves are proposed more frequently. Since w_c is an auxiliary variable, it is accepted/rejected together with the proposed parameters, so the sampler automatically chooses a value of w_c that is reasonable.

Label-switching

In order to draw cluster-specific inferences, cluster labels need to be assigned for every posterior sample available. This is known as the label-switching problem (Stephens 2000a,b; Papastamoulis and Iliopoulos 2010) and it is especially challenging in the case of a variable number of clusters. Here we take a pivoting approach to assign cluster labels on-line (i.e., without the need for post-processing). The algorithm works as follows:

1. During burn-in of the first chain, record the cluster labels of the posterior sample with the highest value of the posterior density, denoted by c*.

2. Once this clustering is fixed, subsequent labels of the posterior sample of the set (μ, Σ) are chosen such that p(Y | μ_{c*}, Σ_{c*}, c*) is maximised. In other words, labels of the set of means and covariances are chosen such that the likelihood relative to the (approximate) maximum a posteriori clustering c* is maximised.

Hashing

In contrast to coalescent trees, which are binary and can be represented simply by the pairs of subsequent coalescence events, haplotype trees do not have shorthand representations. Instead, a standard way to represent a haplotype tree is through its corresponding graph adjacency matrix. However, keeping track of posterior samples of trees requires storing entire matrices at each iteration of the sampler, which creates a memory bottleneck. In our case, we can take advantage of the fact that not all adjacency matrices are possible; most edges are either certainly present or absent as determined by Ω. Uncertainty only arises through edges that are part of a loop in the network, so each tree is characterised by the set of deleted edges. Trees are then reduced to vectors of length n_loop with integer entries.
Standard hashing techniques can thus be used to store the number of times each tree (i.e., each integer vector) appears in the MCMC posterior samples. Hashing algorithms allow us to represent integer vectors by a single integer. In our case, we can store the index of the edge deleted from each loop at each iteration of the MCMC, keeping track of them via the 'hashing index' of the entire vector. Hash functions create a short (as short as possible) address book where each of these numbers is stored in a specific page, in such a way that it can easily be retrieved (see Knuth 1998).
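The bookkeeping itself is done inside the package, but the idea can be illustrated in a few lines of R; the list sampledTrees below is purely illustrative (one integer vector of deleted-edge indices per posterior sample):

R> sampledTrees <- list(c(2, 1), c(2, 1), c(3, 1))                   # illustrative posterior samples
R> keys <- vapply(sampledTrees, paste, character(1), collapse = "-") # one key string per tree
R> treeCounts <- table(keys)                                         # frequency of each distinct tree
R> sort(treeCounts, decreasing = TRUE)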
Unequal division in Saccharomyces cerevisiae and its implications for the control of cell division

The budding yeast, Saccharomyces cerevisiae, was grown exponentially at different rates in the presence of growth rate-limiting concentrations of a protein synthesis inhibitor, cycloheximide. The volumes of the parent cell and the bud were determined, as were the intervals of the cell cycle devoted to the unbudded and budded periods. We found that S. cerevisiae cells divide unequally. The daughter cell (the cell produced at division by the bud of the previous cycle) is smaller and has a longer subsequent cell cycle than the parent cell which produced it. During the budded period most of the volume increase occurs in the bud and very little in the parent cell, while during the unbudded period both the daughter and the parent cell increase significantly in volume. The length of the budded interval of the cell cycle varies little as a function of population doubling time; the unbudded interval of the parent cell varies moderately; and the unbudded interval of the daughter cell varies greatly (in the latter case an increase of 100 min in population doubling time results in an increase of 124 min in the daughter cell's unbudded interval). All of the increase in the unbudded period occurs in that interval of G1 that precedes the point of cell cycle arrest by the S. cerevisiae alpha-mating factor. These results are qualitatively consistent with and support the model for the coordination of growth and division (Johnston, G. C., J. R. Pringle, and L. H. Hartwell. 1977. Exp. Cell Res. 105:79-98). This model states that growth, and not the events of the DNA-division cycle, is rate limiting for cellular proliferation and that the attainment of a critical cell size is a necessary prerequisite for the "start" event in the DNA-division cycle, the event that requires the cdc 28 gene product, is inhibited by mating factor and results in duplication of the spindle pole body.

place at integral multiples of a particular cell mass. Killander and Zetterberg (19) observed that mouse cells in culture had a smaller variation in mass and a larger variation in age at the onset of DNA synthesis than they exhibited at division, "suggesting that the initiation of DNA synthesis is more related to the mass than to the age of the cell." Other observations suggesting a similar relationship between cell size and the onset of DNA synthesis have been made for Chinese hamster cells (20) and human lymphoid cells (36). However, Fox and Pardee (9) failed to find such a relationship in Chinese hamster ovary cells. A particularly enlightening example is provided by the yeast Schizosaccharomyces pombe, where the control of DNA synthesis by cell size is cryptic under conditions of rapid growth but can be demonstrated upon nutritional deprivation (25). S. cerevisiae permits a rather dramatic demonstration of the relationship between growth and division. During nutrient starvation parent cells produce extremely small daughters which result from the unequal distribution of mass between parent and bud, a situation not usually encountered in organisms that divide by binary fission. The interval of time from the addition of fresh nutrients to starved cells until the initiation of a new cell cycle is inversely related to the initial size of the cell because all cells grow to approximately the same size before initiating a new cell cycle (18).
Other experiments demonstrate that cycles once initiated can be completed with little or no net growth, a result indicating that the growth requirement is unique to a particular step in the cell cycle. The event in the S. cerevisiae cell cycle that is uniquely sensitive to cell size has been located at or before the step in the G1 interval of the cell cycle that is controlled by the product of gene cdc 28 (15). Expression of the cdc 28 product is essential for the duplication of the spindle pole body on the nuclear membrane (4). The cdc 28-mediated step precedes the actual initiation of DNA replication by at least two other steps, those controlled by the products of genes cdc 4 and cdc 7 (15). The cdc 28-controlled step is also the step at which mating factors arrest haploid cells, apparently in order to synchronize the two cell cycles before cell fusion during conjugation (3,34), and hence sensitivity to mating factor provides a convenient test for whether or not a particular cell has passed this point of control. Starvation of prototrophic S. cerevisiae cells for any one of a variety of essential nutrients also synchronizes the cell cycles at the cdc 28 step (2,30,35, Pringle and Maddox, personal communication). The cdc 28-mediated step has been termed "start" because it controls the commitment of the cell to division (12). The experiments that demonstrated a correlation between completion of the start event and the attainment of a critical cell size in S. cerevisiae involved shifting cells from nutrient-sufficient to nutrient-deficient conditions and vice versa, as well as shifts of temperature-sensitive mutants to the restrictive temperature (18). It is possible that the change in conditions imposed upon the cell during these shifts induced control mechanisms that do not operate during steady-state growth. For example, the ability of the cell to divide before the daughter bud has attained a size comparable to that of the parent after nutrient starvation might be a special property of starved cells. It is the purpose of this report to examine the growth and division of S. cerevisiae cells under steady-state conditions to determine whether the hypothesis of a size requirement for completion of the start event remains tenable.

For all experiments except a few of those reported in Fig. 6, cells grown in liquid medium were in YNB (18) and those grown on solid medium were on YNB containing 10 g/liter noble agar (Difco Laboratories, Detroit, Mich.). In a few experiments reported in Fig. 6, cells were grown in YNB containing supplements for auxotrophic requirements or in YM-1 (10). Cells were grown on solid medium or in liquid medium in flasks with rotary shaking at a temperature of 22°-24°C. Cells grew more slowly in liquid medium than on solid medium despite low ratios of culture medium to flask volume and rapid shaking, and the cells in liquid displayed a higher proportion of unbudded cells. The difference in the fraction of unbudded cells was just about what would be produced by slowing down the growth rate with cycloheximide on solid medium to that observed without cycloheximide in liquid medium. Although it is not necessary to compare directly cells grown in liquid to those grown on solid medium for the arguments to be made below, it is probably correct to compare the cells that are growing at the same growth rate under the two conditions rather than to compare cells growing with the same concentration of cycloheximide.
The viability of strain 2180A cells growing in a steady state in YNB liquid medium containing various concentrations of cycloheximide was determined. One thousand individual cells were scored by time-lapse photomicroscopy for their ability to form microcolonies on solid medium without cycloheximide. Over the range of concentrations of cycloheximide used in the experiments reported in this paper, between 93 and 99% of the cells were viable.

Measurement of Cell Parameters

Procedures for determination of the cell number, the proportion of unbudded cells (18), and the number of bud scars per cell (5) have been described previously. The volumes of individual cells were calculated from phase-contrast micrographs, assuming that the yeast cell is a prolate spheroid (28). The micrographs were enlarged by projection and the major and minor axes of the cell were measured with a Graf/Pen digitizer (model GP-3) (Science Accessories Corp., Southport, Conn.) interfaced with a Hewlett-Packard calculator (model 9820A) (Hewlett-Packard Co., Palo Alto, Calif.). To determine the magnification so that absolute cell volumes could be obtained, the grid system of a Petroff-Hausser counter (C. A. Hausser & Son, Philadelphia, Pa.) was photographed with the same optical system, and the magnification was calculated from repeated measurements of this standard. Some cell volume distributions were also obtained with a particle size distribution analyzer (Coulter Channelyzer, Coulter Electronics Inc., Hialeah, Fla.). The analyzer was calibrated using 22.26- and 73.62-μm³ polystyrene beads.

Time-Lapse Photomicroscopy

An overnight stock culture grown in YM-1 medium was diluted 10- or 30-fold and 0.1 ml was spread onto a YNB-agar plate. Cells were pregrown for 18-24 h at room temperature (22°-24°C) on plates containing 1% noble agar (Difco Laboratories) and the same concentration of nutrients and inhibitor to be used in the time-lapse photography. Cells were washed off the plate with 1 ml of YNB liquid medium, agitated on a vortex mixer for 30 s, and a drop was placed onto a 12 × 30 × 1 mm slab of agar. The cells were allowed to settle out for 1-2 min, and then the slide was placed in a vertical position to permit the liquid to run off the cells and the surface to dry. A nylon screen (1.5 mm between fibers) that had been previously washed in ethanol and water was placed over the cells to provide a frame of reference. Photographs were taken at room temperature (22°-24°C) at intervals of 10-20 min for 6-12 h, depending upon the growth rate of the cells. Individual cells were then scored for their pattern of budding from the projected negatives. All initially unbudded cells were scored until a total of five cell units (a unit is a parent cell or a bud) had appeared, and all scored cells are reported unless they could not be followed unambiguously (due to crowding) for the full course of the experiment. When cells were pregrown in liquid medium or sonicated before the time-lapse experiments, the population exhibited deviations from exponential growth during time-lapse photography, and hence these procedures were not used.

Unequal Division

Exponentially growing populations of S. cerevisiae C276 were followed by time-lapse photography. The doubling time of each population was determined by counting the total number of cell units (each parent cell and each bud is a unit) in the same field at successive times (Fig. 1; growth was in medium at 23°C, with a doubling time of 167 min in the absence of cycloheximide and 321 min in the presence of 0.060 μg/ml cycloheximide).
The slight deviations from exponential growth may be a consequence of statistical fluctuations resulting from the limited sample sizes or may be due to a small perturbation in the cells resulting from the culture transfer. To investigate the distribution of generation times of individual cells, the intervals between successive budding events of initially unbudded cells from the exponentially growing culture were scored. The first generation time of the parent cell (P1 in Fig. 2) was equated to the interval of time from the appearance of its first bud until the appearance of its second bud, and the second generation was the interval from the parent cell's second bud until its third bud (P2 in Fig. 2). The histograms of first and second generation times were unimodal (Fig. 3), with means of 132 ± 26 and 138 ± 25 min, respectively (here and elsewhere, standard deviations are given).

FIGURE 2 Definition of parent and daughter generation times from time-lapse photomicrographs. An initially unbudded cell (whose origin as a daughter or a parent from a previous cycle is unknown) is observed to bud. After some interval, defined as the first parent generation (P1), the parent cell buds for a second time. The daughter buds next, marking the end of the daughter generation (D). The second parent generation (P2) is defined as the interval from the parent cell's second budding until its third. Division of the parent from the bud occurs after interval A in the first parent generation and after interval A' in the second; division of the daughter from its first bud occurs after interval A''. The parent and daughter cells are separated in the diagram after division for clarity, but they actually remain together on the agar surface; consequently, the divisions which are shown in parentheses cannot be seen in the photographs.

FIGURE 3 Histogram of parent and daughter generation times. Cells of strain C276 growing on YNB agar plates at 23°C without cycloheximide were photographed at 10-min intervals, and the intervals between successive budding events were scored from the photographs. Panel A is for the first parent generation, panel B for the second parent generation, and panel C for the daughter generation.

Because the cells undergoing the first generation in this experiment included cells that are budding for their first time as well as cells (in decreasing proportion) that are budding for their second, third, etc., time, and because the histograms for the first and second generation are unimodal and approximately the same, we are justified in concluding that parent cells (cells that have a bud or have produced one or more buds) have approximately the same generation time for at least their first two to three cycles. It is customary to compute generation times from one division to the next. Our method of utilizing the appearance of buds as the boundaries separating generations is necessitated by the fact that the time of division of the cells cannot be determined from photographs.
Inspection of the diagram in Fig. 2 reveals, however, that the interval from the appearance of the parent's first bud until its second is identical to the interval from the division of the parent from its first bud until the division of the parent from its second bud, providing a parent cell has the same generation time (and the same allocation of this time to pre- and postbudding states) in generation n + 1 as it had in generation n (i.e., in Fig. 2, interval A = interval A', and hence interval A + B = interval B + A'). The generation time of the daughter is defined as the interval from its first appearance as a bud on the parent cell until it produces a bud of its own (interval D, Fig. 2). Inspection of Fig. 2 reveals that this interval is identical (with the same proviso as above) to the interval from the division of the daughter from the parent cell until its first division as a parent from its first bud (i.e., in Fig. 2, interval A = interval A'', and therefore interval A + C = interval C + A''). The generation time for the daughter is also unimodal, with a mean of 203 ± 38 min (Fig. 3). The generation time of the daughter is significantly longer than that for the parent cell, and the overall doubling time of the population must be a composite of these two. A simple model of the cell cycle that accounts for these observations is presented in Fig. 4. We assume that all parent cells have the same generation time regardless of the number of daughters that they have produced previously. Further, we assume that daughter cells have a longer generation time during their first cell cycle that is accounted for entirely by the period before the time that they first bud. This formulation of the S. cerevisiae cell cycle was suggested previously (17), and we will present quantitative data in its support. The standard age distribution equation (26) does not apply to a system undergoing unequal division, and a different formulation must be employed (see Appendix). The age distribution equation for the model presented in Fig. 4 can be used to derive a relationship between the generation time of the parent cell, the generation time of the daughter, and the population doubling time (see Appendix, Eq. 8). Solution of this equation for the population doubling time by numerical approximation gives a value of 165 min, and the agreement with the observed value of 167 min (Table I) is strong support for the model of Fig. 4.
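Written out explicitly, the relation referred to above takes the following form under the model of Fig. 4 (a reconstruction from the stated definitions; the published Eq. 8 may be arranged differently), and the quoted value can be checked directly:

\[
e^{-\alpha P} + e^{-\alpha D} = 1, \qquad \alpha = \frac{\ln 2}{T},
\]

where $P$ and $D$ are the parent and daughter generation times and $T$ is the population doubling time. With $P = 132$ min and $D = 203$ min, solving numerically for $T$ gives $T \approx 165$ min, matching the value quoted above.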
TABLE I The 1st and 2nd parent generation times and the daughter generation times were determined by time-lapse photography as defined in Fig. 2.

As a second test of this formulation, we have calculated the expected proportion of unbudded cells in this exponential population that are buds, i.e., have not budded before, and have measured this quantity by staining cells for bud scars. Unbudded buds were distinguished from unbudded parent cells in that the former have no bud scars while the latter contain one or more. The calculated value was 82% (see Appendix, Eq. 9) and the observed value was 78.8 ± 2.0%; the agreement with expectation was considered satisfactory.

Population Dynamics under Limiting Protein Synthesis

We have determined the generation times of parent and daughter cells when growth was limited by low concentrations of cycloheximide (Fig. 1). Cells were pregrown for 18-24 h in growth-limiting concentrations of cycloheximide and then followed by time-lapse photomicroscopy. The population doubling time, the parent generation time, and the daughter generation time were determined. The parent and daughter generation times increase with increasing concentrations of cycloheximide, as does the population doubling time (Table I). The assumption in the model of Fig. 4 that all parent cells have the same generation time is supported by the observation that the first and second parent generation times are in reasonably good agreement, although the second generation may be slightly faster than the first at the slower growth rates. The calculated population doubling time agrees reasonably well with the observed doubling time, and this result suggests that even under limiting growth conditions the model of Fig. 4 is valid. A further test of the validity of the model of Fig. 4 under conditions of limiting growth is provided by a comparison of the observed frequency of parent cells (those with one or more bud scars) among the unbudded cells with the frequency expected from the age distribution equation (Appendix, Eq. 9). The observations are in satisfactory agreement with expectation (Table II).

It is possible to separate the generation times into two intervals, the budded and the unbudded intervals. We have measured the frequency of budded cells in populations growing asynchronously on agar plates containing various concentrations of cycloheximide by washing the cells off the plate and counting the budded and unbudded cells. In the same experiment the population doubling times were determined. The fraction of budded cells can be converted to the interval of time occupied by the budded period by means of the age distribution equation (Appendix, Eq. 3). In contrast to the budded period, the unbudded intervals are greatly prolonged as the growth rate is depressed. From the slopes of the curves in Fig. 5, it is evident that most of the increased generation time at slower growth rates is due to the increase in the unbudded intervals.
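The conversion from the fraction of budded cells to the length of the budded interval, used above, can be sketched as follows (a reconstruction under the model of Fig. 4; the exact form of Eq. 3 is not reproduced here). If $F_B$ denotes the fraction of budded cells and $\alpha = \ln 2 / T$, then

\[
F_B = e^{\alpha B} - 1, \qquad \text{so that} \qquad B = \frac{T \, \ln(1 + F_B)}{\ln 2},
\]

where $B$ is the length of the budded interval and $T$ the population doubling time.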
Relation of Growth to Division

The parent cell changes little in volume over the course of the budded interval. The parent portion of budded cells was measured for cells growing exponentially in YNB liquid medium without cycloheximide, and the volumes were computed for 29 parents with small buds (0.0-0.05 the volume of the parent) and for 20 parents with large buds (between 0.55 and 0.75 the volume of the parent; these are the largest buds present). All of the cells had a single bud scar (in the neck between parent and bud) and hence were in their first parental cycle. The average volume was 78.6 ± 12.4 μm³ for cells with small buds and 84.6 ± 9.4 μm³ for cells with large buds. Thus, the parent cell changes relatively little in volume over the course of the budded interval.

The temporal relationships between the unbudded interval, the budded interval, and the generation times of daughter and parent lead to certain expectations for the size of cells. Because the length of the budded interval is relatively independent of growth rate, and because the parent cell changes little in volume over the course of the budded interval, one would expect the size of the bud at the time of division to become progressively smaller at slower growth rates. This occurs because the cell devotes almost the same amount of time to the production of a bud whether it is growing rapidly or slowly. Even at the fastest growth rates encountered in these experiments, the bud does not reach the size of the parent cell at division. This conclusion was arrived at from two sets of observations, necessitated by the fact that the photographic resolution of cells growing on agar is not sufficient to permit accurate measurement of cell size, and by the fact that in cells removed from liquid for high-resolution phase-contrast microscopy, the identity of parent and bud cannot be determined. First, a naive observer was asked to tell which of the two units in a parent-daughter complex, selected from time-lapse photographs to be at the time of division (10 min before the next budding of the parent cell), was the larger. In 68 out of 70 cases the observer picked the parent as the larger, in two cases the observer said that they were about the same, and in no case did the observer say that the bud was bigger. From this result, we felt justified in assuming that the larger component of a parent-daughter complex was the parent cell, and measurements were then made on cells growing in YNB liquid medium where high-resolution phase-contrast photographs could be obtained, but where the identity of the parent could not be determined. In a control culture growing with a population doubling time of 200 min, the volumes of the parent portion and bud portion of 394 budded cells were determined and the ratio of bud volume to parent cell volume was computed and plotted as a histogram. The parent portion of the budded cells had a mean of 96.2 ± 18 μm³. As an estimate of the size of the bud at division, we take the value of the bud to parent volume ratio at the 95th percentile of the histogram, i.e., the value of the bud to parent volume ratio that was as great as that observed for 95% of the population, which was 0.77. The same measurement was made for 265 cells growing exponentially in 0.060 μg/ml cycloheximide at a doubling time of 348 min. The parent portion of the budded cells had a mean volume of 109 ± 33 μm³, and the bud to parent volume ratio at the 95th percentile was 0.43 for this culture. Hence the bud is significantly smaller than the parent cell at the time of division in the control culture. Furthermore, when the growth rate is slowed by limiting the rate of protein synthesis, the bud becomes smaller at the time of division.

The data from the time-lapse experiments (Fig. 5) indicate that the parent cell has a detectable unbudded period at fast growth rates, and that this interval becomes progressively longer at slow growth rates. This fact suggests that the parent cell might become progressively larger each time it produces a bud. We have measured the volume of the parent portion of budded cells that were growing in medium with a generation time of 200 min and correlated these measurements with the number of bud scars on the parent cell (Table III). The data indicate that parent cells increase in volume by an average value of about 23% each generation. Since this is much larger than the amount of increase exhibited by a parent cell during the budded period (7%), most of this increase must be occurring during the unbudded interval.

Breakdown of the Unbudded Interval into Pre- and Post-α-Factor Execution

The point of mating factor arrest is the first known step in the cell cycle and is the point at which nutritionally limited cells arrest (2, 30, 35, Pringle and Maddox, personal communication) and the point at which growth and division are integrated (18).
It was important therefore to determine how the increased length of the unbudded interval that occurs during growth limitation with cycloheximide is apportioned between the intervals before and after this point. Haploid cells of strain 2180a were grown in low concentrations of cycloheximide for 24-48 h to achieve a steady state. They were then placed on solid medium containing α-factor and photographed at successive time intervals. Cells that were originally unbudded either remained unbudded and produced morphologically altered cells termed schmoos, or budded to produce two cells, both of which then produced schmoos. The former class was considered to be before, and the latter subsequent to, the point of α-factor arrest at the time of the shift. The interval of the unbudded period that precedes and succeeds the point of α-factor arrest is recorded in Table IV for a variety of growth rates. The former varies more than sixfold over the growth rates examined while the latter varies 1.5-fold. Consequently, the dramatic increase in the unbudded period that occurs at slower growth rates (Fig. 5) occurs almost exclusively in the unbudded interval before the point of mating factor arrest.

Footnotes to Table IV: * Calculated from the fraction of the cells that were unbudded, using Eq. 3 (Appendix). ‡ The interval of the unbudded period that preceded the point of α-factor arrest; calculated from the proportion of unbudded cells that failed to divide in the presence of α-factor. § The interval of the unbudded period that succeeded the point of α-factor arrest; calculated as the difference between the second and third columns.

Other Protein Synthesis Inhibitors

We wished to determine whether the preferential lengthening of the unbudded phase of the S. cerevisiae cell cycle by cycloheximide was a general response to a limitation of protein synthesis or a specific response to this inhibitor. A number of inhibitors and temperature-sensitive mutations that are known to block protein synthesis in S. cerevisiae were tested to see whether depressed, exponential growth rates could be attained at moderate levels of inhibitor or intermediate temperatures. We were able to attain steady-state conditions for the inhibitors mimosine (29) and trichodermin (32), for the aminoacyl-tRNA synthetase mutations ils 1 (13) and mes 1 (22), and for the mutation prt 1, which blocks the initiation of polypeptide chains (14). The temperature-sensitive protein synthesis mutants were grown at a variety of temperatures, and the inhibitor-sensitive strains were grown in different concentrations of inhibitor; the growth rate as well as the fraction of budded cells was determined. The length of the budded period was then calculated, using the age distribution equation (Appendix, Eq. 3). A plot of the increase in the length of the budded period as a function of the increase in generation time is recorded in Fig. 6. Included in these data are experiments in which cycloheximide was used as growth inhibitor for three different strains. We have not attempted to designate each strain and growth limitation because all strains behaved similarly. The result in all cases was that the length of the budded period changed relatively little with increasing growth rates. The linear regression line through these points has a slope of 0.17 min/min for the rate of change of the budded period as a function of population doubling time, a value that is identical to that obtained for strain C276 growing on solid medium containing various concentrations of cycloheximide (Fig. 5).
It is possible that curves other than a straight line would provide a better statistical fit to the data, but we have not investigated this possibility. Therefore, the major consequence of a limitation of growth at the level of protein biosynthesis is a lengthening of the unbudded interval of the cycle.

DISCUSSION

We have examined the growth and division of S. cerevisiae cells under steady-state conditions when the rate of protein synthesis was growth rate limiting. We observe that the cells divide unequally: the bud is smaller than the parent cell at division, and the length of the next cycle is longer for the bud than for the parent. These inequalities become more pronounced as the rate of protein synthesis is depressed. Furthermore, the parent cell remains relatively constant in volume throughout the budded portion of the cycle, and the length of the budded interval varies only slightly as the growth rate is depressed. Although a systematic investigation of the cycle times of parent and bud at different growth rates has not been reported previously, it is worth noting that each of these five observations has been reported numerous times (see the discussion of reference 18 for an exhaustive review on the relationship between growth and division of S. cerevisiae cells), and there can be no doubt about the generality of these phenomena under a variety of conditions. Two particularly pertinent prior studies are those of Von Meyenburg (31) and Barford and Hall (1). Von Meyenburg found that the proportion of unbudded cells increased dramatically as the generation time increased in glucose-limited chemostats and concluded that all of the increase in generation time was occurring during the unbudded interval of the cell cycle. Barford and Hall noted a greater than 20-fold lengthening of the G1 interval for cells growing on ethanol compared to those growing on glucose; the increase in G1 accounted for most of the increase in generation time.

The results reported herein are, at least qualitatively, what would be expected from a model presented previously for the coordination of growth and division in S. cerevisiae (18). The model proposed that growth, rather than progress through the DNA-division cycle, is normally rate-limiting for cell proliferation and that a critical cell size must be attained before the completion of the start event in G1. Because the length of the budded phase does not change markedly with growth rate, it is apparent that a parent cell has about the same amount of time to produce a bud at slow growth rates as it does at fast growth rates. Furthermore, because the parent cell remains relatively constant in volume throughout the budded phase, essentially all of the growth that occurs during this time is apportioned to the bud at division. It follows that the size of the bud should be smaller at division for cells growing more slowly. This is what we observe; the bud was estimated to have a volume 0.77 that of the parent cell at division for cells growing with a doubling time of 200 min, and 0.43 for cells with a 348-min doubling time. If the cell must attain a critical size before it can begin a cell cycle, then we would expect the daughter (bud) to have a longer unbudded phase than the parent cell, and the length of the unbudded interval should increase as the growth rate decreases. This expectation is also fulfilled by the observations.
It would be even more satisfying if we could test the quantitative agreement between our data and expectation. To make a quantitative comparison, however, one must make some ad hoc assumptions about the way in which individual cells grow. If we assume that individual cells increase their masses exponentially with the same rate constant throughout the cell cycle, then the constant must of course be the same as that for the population as a whole. We can then ask whether the observed time intervals for unbudded and budded periods are consistent with the maintenance of a steady state in the culture with respect to cell growth and division. For example, if we assign a cell that is budding for the first time a mass $m_0$, then the daughter of this parent must reach mass $m_0$ at the time it buds for its first time. This necessity arises from the fact that the cells are growing under steady-state conditions and is not dependent upon any particular models for growth or division. For the discussion that follows it is more convenient to consider the mass, $m_P$, of a daughter cell that is P time units from the next division (this point in the cycle will be called the reference time; see Fig. 4). This cell has grown for D − P time units since the last division, and will bud in P − B time units, where P is the parent cell generation time, D is the daughter cell generation time, and B is the length of the budded interval. The steady-state assumption demands that the daughter of this cell also reach mass $m_P$ after D time units have elapsed. If all of the mass increase that occurs after this cell buds is distributed to its daughter bud at division, then the new daughter will have a mass of $m_P e^{\alpha(D-B)}(e^{\alpha B} - 1)$ at the reference time in the next cell cycle. This expression has been evaluated in Table V (model 1) for the five different growth rates, and it is evident that the steady state is not maintained under this set of assumptions, especially at the slower growth rates. That is, for all growth rates the mass of a new daughter at the reference time is considerably less than the mass of its parent one cycle earlier. Of course, any one of our assumptions about the way the individual cells grow or apportion mass between daughter and parent could be altered to accommodate the steady state. We will present only one possible change that has an interesting biological implication. If we assume that a parent cell contributes to its bud at division the mass accumulated during the entire P interval (rather than just the mass accumulated during the B interval), then the mass of the daughter at the reference time would be $m_P e^{\alpha(D-P)}(e^{\alpha P} - 1)$. Evaluation of this expression for the different experiments shows reasonably satisfactory agreement with the steady-state assumption (Table V, model 2). Hence one way of maintaining the steady state is for the parent cell to contribute the mass it accumulates during its unbudded period (in addition to that accumulated during its budded period) to its daughter at division. However, another difficulty arises with this assumption. If the parent cell contributes all of its mass increase to the daughter cell, then the parent cell would not be expected to increase in size in successive generations. But we observe, as have others (16,21,24), that the parent cell does increase in volume each generation (Table IV). These difficulties in accounting quantitatively for the growth of individual cells may be more apparent than real as a consequence of our ignorance regarding how the cell measures its size.
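For reference, the two expressions evaluated in Table V follow from the stated assumptions in a few lines (a reconstruction; $\alpha$ denotes the population growth rate constant). The daughter of mass $m_P$ at the reference time buds $P - B$ time units later and divides $P$ time units later, so

\[
\text{mass at budding} = m_P e^{\alpha(P-B)}, \qquad \text{mass at division} = m_P e^{\alpha P}.
\]

If only the mass gained while budded is passed on (model 1), the new bud receives $m_P e^{\alpha(P-B)}(e^{\alpha B}-1)$ at division and, after growing for a further $D - P$ time units, has mass $m_P e^{\alpha(D-B)}(e^{\alpha B}-1)$ at its own reference time. If instead the mass gained over the whole interval $P$ is passed on (model 2), the corresponding mass is $m_P e^{\alpha(D-P)}(e^{\alpha P}-1)$; this equals $m_P$ exactly when $e^{-\alpha P} + e^{-\alpha D} = 1$ (the relation sketched earlier for Eq. 8), which is why model 2 comes close to maintaining the steady state when the measured intervals nearly satisfy that relation.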
It is fairly clear that growth is necessary specifically for completion of the first step in the cell cycle, the step that is sensitive to mating factor and is controlled by the cdc 28 gene product. For reasons of convenience, we have used time and volume as measures of this growth requirement in these experiments, and total mass or protein content in other studies (18). These gross parameters of cell size are not well coordinated during the cell cycle (11,23,33), and it is likely that the cell is monitoring some event other than these (like the amount of one specific protein) that may be only loosely correlated with volume, mass, and total protein content. In fact, a histogram for the cellular volume of the parent portion of cells with small buds that are in their first parental generation is quite broad, with a mean of 76.0 ± 14.3 μm³ (data not shown). Clearly, volume itself is not the parameter that the cell monitors. In short, a qualitative consideration of the data is all that appears to be warranted at the present time.

Arrest of cell division at the start event is also observed when prototrophic cells are starved for any one of a variety of nutrients (2, 30, 35, Pringle and Maddox, personal communication). An attempt to locate a signal in the form of a metabolic intermediate of the sulfate assimilation pathway led to the conclusion that if a single signal existed it must be at or subsequent to methionyl-tRNA (30). The fact that accumulation of cells before the start event(s) occurs under a variety of conditions that limit polypeptide initiation and elongation suggests that the controlled response to nutritional starvation and the mechanism for maintaining size homeostasis may be one and the same. At the current state of our understanding, both of these phenomena can be explained by assuming that some particular protein, e.g., the initiator substance of Donachie (7), is made at a constant differential rate of total protein synthesis and that a sufficient amount of this protein must accumulate to permit completion of the start event. Other models are also tenable.

APPENDIX

Consider an asynchronous, exponentially multiplying cell population in which cells progress through the cycle as diagrammed in Fig. 7. The number of cells, N(t), present at any time, t, is given by $N(t) = N(0)e^{\alpha t}$, where $\alpha = \ln 2/T$, T being the population doubling time. The position of a particular cell in the cell cycle is defined as the time, τ, it will take that cell to reach division. Thus, τ is a metric of the age of a given cell, has a value of 0 at division, increases from right to left along the abscissa of Fig. 7, and has a maximum value of D, the daughter cell generation time. We shall assume that there is no dispersion in the daughter or parent cell generation times. Although the data (Fig. 3) demonstrate a dispersion of measured generation times as well as a skewness to longer generation times, we ignore these complications for two reasons. First, the simpler model is mathematically more tractable, and second, we cannot assess how much of the dispersion is due to the behavior of the cells and how much is introduced by the measurement procedure (the photographs were not always of perfect clarity, and they were taken at 10- to 20-min intervals). Furthermore, the simple model appears to be adequate since the predictions made by it are in good agreement with the data (Tables I and II). Let g(a,b) be the number of cells contained in an interval of the cycle between τ = a and τ = b at time t = 0.
All of the new cells produced in the population during an interval of time, t, where t < P (the parent cell generation time), arise by division of cells that lie at time t = 0 in the interval of the cycle between τ = 0 and τ = t. To derive the equation for the ordinate of Fig. 7 for τ > P, we must consider the origin of new cells in the population for t > P. The increase in cell number during the interval between time P and t, where t > P, will result from the division of cells located at time t = 0 in the interval τ = P to τ = t plus the division (for the second time) of cells located at time t = 0 in the interval τ = 0 to τ = t − P: N(t) − N(P) = g(t, P) + g(t − P, 0). A similar argument for the intervals 2P < τ < 3P, ..., nP < τ < (n + 1)P demonstrates that Eq. 7 is valid for all τ > P. Thus, Eqs. 4 and 7 describe the ordinate of Fig. 7 as a function of τ (the abscissa) for 0 ≤ τ < P and P < τ, respectively. Since α = ln 2/T, Eq. 8 is the relationship between the parent generation time, the daughter generation time, and the population doubling time. With S. cerevisiae it is possible to distinguish daughter cells (which lack bud scars) from parent cells (with bud scars), and it is useful therefore to derive their expected frequencies according to the theory of Fig. 7. For example, the proportion of unbudded cells that are daughters, u(d), and hence have no bud scar, is found by integrating Eq. 7 between D (the daughter cell generation time) and B (the length of the budded period) and dividing this result by the total number of unbudded cells. The latter quantity is found by integrating Eqs. 4 and 7 over the unbudded portions of the cycle. Received for publication 23 May 1977, and in revised form 18 July 1977.
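As a sanity check on the appendix's age-structure bookkeeping, the sketch below numerically propagates a population in which every dividing cell yields one cell that divides again after P time units and one new daughter that divides after D time units, with no dispersion, exactly as assumed above. The parameter values are placeholders for illustration only. The script fits the asymptotic growth rate α and reports exp(−αP) + exp(−αD), which comes out close to 1 for this branching scheme; this appears to be the kind of relation between P, D, and T that Eq. 8 refers to, although the equation itself is not reproduced in the text above.

```python
import numpy as np

# Illustrative placeholder values only; the paper's measured intervals are not reproduced here.
P, D = 100, 140          # parent and daughter generation times, in 1-min steps
steps = 4000

# n[r] = number of cells that are r minutes away from their next division.
n = np.zeros(D)
n[D - 1] = 1.0           # start from a single newborn daughter cell
N = []
for _ in range(steps):
    dividing = n[0]       # cells reaching division on this step
    n[:-1] = n[1:]        # every other cell moves one minute closer to division
    n[-1] = 0.0
    n[P - 1] += dividing  # each divider re-enters the cycle P minutes from its next division
    n[D - 1] += dividing  # and produces one new daughter that needs D minutes
    N.append(n.sum())

t = np.arange(1, steps + 1)
alpha = np.polyfit(t[steps // 2:], np.log(N[steps // 2:]), 1)[0]   # asymptotic growth rate
print("doubling time T ~", np.log(2) / alpha, "min")
print("exp(-alpha*P) + exp(-alpha*D) ~", np.exp(-alpha * P) + np.exp(-alpha * D))
```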
2014-10-01T00:00:00.000Z
1977-11-01T00:00:00.000
{ "year": 1977, "sha1": "403cbbbdf75587d12b9433e91741351698c5bffd", "oa_license": "CCBYNCSA", "oa_url": "https://rupress.org/jcb/article-pdf/75/2/422/1072756/422.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "403cbbbdf75587d12b9433e91741351698c5bffd", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
16762441
pes2o/s2orc
v3-fos-license
Cardiovascular disease risk factors in homeless people Background Cardiovascular diseases (CVD) are associated with significant morbidity and mortality, which is highest in Eastern Europe including Estonia. Accumulating evidence suggests that life-style is associated with the development of CVD. The aim of this study was to evaluate the informative power of common CVD-related markers under unhealthy conditions. Subjects Subjects (n = 51; mean age 45 years; 90% men) were recruited from a shelter for homeless people in Tallinn, Estonia, and consisted of persons who constantly used alcohol or surrogates, smoked, and were in a bad physical condition (amputated toes, necrotic ulcers, etc.). Methods Blood pressure, pulse rate, and waist circumference were measured, and body mass index (BMI) was calculated. The following markers were measured in blood serum: total cholesterol (TChol), high-density lipoprotein cholesterol (HDL-Chol), low-density lipoprotein cholesterol (LDL-Chol), plasma triglycerides (TG), apolipoproteins A-l (ApoA1) and B (ApoB), lipoprotein(a) (Lp(a)), glycated hemoglobin (HbA1c), glucose (Gluc), high-sensitivity C-reactive protein (hsCRP), serum carbohydrate-deficient transferrin (CDT), gamma-glutamyltransferase (GGT), alanine aminotransferase (ALT), and aspartate aminotransferase (AST). Except smoking, the anamnestic information considering eating habits, declared alcohol consumption and medication intake were not included in the analysis due to the low credibility of self-reported data. Results More than half of the investigated patients had values of measured markers (hsCRP, TChol, LDL-Chol, TG, HbA1c, ApoA1, ApoB, Lp(a), Gluc) within normal range. Surprisingly, 100% of subjects had HDL-Chol within endemic norm. Conclusion This study demonstrates that traditional markers, commonly used for prediction and diagnosis and treatment of CVD, are not always applicable to homeless people, apparently due to their aberrant life-style. Introduction Cardiovascular diseases (CVD) are one of the most common clinical diagnoses in the world associated with significant morbidity and mortality, which is highest in Eastern Europe including Estonia (1)(2)(3). Also compared to international standards CVD readings (risk factors) in Estonia are considered to be higher (4,5). A large body of experimental data and results from cross-sectional studies indicate relationships between CVD and total cholesterol (TChol), low-density lipoprotein cholesterol (LDL-Chol), high-density lipoprotein cholesterol (HDL-Chol), high-sensitivity C-reactive protein (hsCRP), plasma triglycerides (TG), and glucose (Gluc) tolerance (6)(7)(8). These markers are automatically used to predict, diagnose, and estimate the effectiveness of the CVD treatment but still leave atherosclerosis-related diseases a major challenge for scientists and cardiologists. CVD is also considered to be the leading cause of death among people who live unsalubrious lives-homeless people (9,10). Undoubtedly, there are factors which are influenced by people themselves, e.g. unhealthy diet, physical inactivity, smoking, and alcohol abuse (11), but at the same time controversial connections have been demonstrated between lifestyle and CVD risk factors (12)(13)(14). Considering Estonia, homelessness became apparent in the mid-1990s (15), but the data concerning the number of homeless people are questionable. The main reasons for becoming homeless in Estonia are unemployment (85%) and/or alcoholism (60%) (16). 
Most of homeless individuals have health problems, but to keep their self-esteem the self-estimation of the health condition is generally considered not bad (17). In addition there are so far no studies carried out in Estonia describing the health problems specific to homeless people from a clinical point of view nor their morbidity pattern and death rates. Through investigating homeless people (persons who live unsalubrious lives and are commonly 'CVD-labeled'), the aim of this study was to assess traditional CVD-related markers under unhealthy conditions and to evaluate the actual informative power of traditional risk markers as risk factors or predictive markers for CVD. Subjects The study was approved by the Tallinn Medical Research Ethics Committee, and written informed consent was obtained from all of the participants (46 males and 5 females, mean age 45 ± 12.5 years, range 19-66 years). The recruitment and the procedures (measurements, blood sample drawing, etc.) took place in the District Shelter (situated in Mustamäe Tallinn, Estonia) which is a place where homeless people can stay for the night (when not alcohol-intoxicated) but no systematic food provision, medication, or any other services are rendered. A homeless person was considered eligible for the investigation if the following criteria were met: . did not have permanent job . did not have a regular income . did not have a permanent home . constantly used alcohol or surrogates . did not have systematic (regular) eating habits . was not engaged in regular physical activity. The pre-recruitment procedure consisted of selection of the asocial contingent, based on the knowledge of the shelter staff, who excluded those subjects (7 out of 58) whom they met for the first time or had seen rarely. All the others (n = 51) were confirmed as eligible for the study, based on the above-mentioned criteria. The selection, with assistance from the staff, was followed by administration of a questionnaire consisting of questions about education, work, lifestyle (how many years of being homeless, eating habits, smoking, drugs and alcohol consumption (what and how much and for how long time), physical activity (how long walks and how many hours outside)) and medical background (what illnesses and what kind of treatment). The self-reported data mentioned below, based on participants' answers, were not included (except smoking-94% of participants confirmed to be smokers) in the analysis, due to low credibility, which was found through many reasons: when specific information met a contradiction with the data given by the staff of the shelter, subjects denied or seemed to be ashamed of their condition, were doubtful, did not remember. Still the questionnaire gave us the extra proof that they really were suitable for the study considering our criteria. The mean time of homeless status was approximately 4 years (3.6 ± 3.2 years). Even if the visible physical condition spoke for itself, half of the participants denied or had a doubt of having any illness, pain, or trauma. The other half of the participants mostly claimed to have head, back, or leg traumas, but pneumonia, arthritis, and asthma were also mentioned-25% used painkillers but the specific necessary treatment or medication was still not obtained. Also, for the purpose of finding out the participants' medical background, a search of the medical data was performed, which did not give any significant results. 
All participants confirmed using alcohol or surrogates almost every day but did not give an overview specifically what, in what amount, and for how long periods of time they had been consuming. Even though they all said that they had approximately two meals a day, it could not be counted as systematic eating due to the fact that no food was served at the shelter and also the information given by the participants was doubtful (subjectivity arose as to what is meant by the term 'a meal'). Contradictions were also found concerning information about physical activity-participants claimed to walk approximately 11 km (10.8 ± 7.1) a day, but seeing their physical conditions this was largely questionable. Methods Blood pressure (BP) and pulse rate were measured by using a sphygmomanometer in a sitting position after 5 minutes of rest. In addition to weight and height information, waist circumference was also registered (measuring tape positioned mid-way between the top of the hip-bone and the bottom of the rib-cage). Body mass index (BMI) was calculated as the weight in kilos divided by the square of the height in meters. Blood sample collection and transport All blood samples were obtained in a sitting position. Guidelines, indicating the importance of fasting from food before blood sampling, were given to each participant before the testing period. After an overnight fast, blood samples were collected between 8.00 and 11.00 a.m., with standard method from vena cubitalis using Vacutainer collection tubes (BD Vacutainer, Belliver Industrial Estate, Plymouth; Becton, Dickinson and Co., UK) as follows: for Gluc determination with preservatives (fluoride oxalate); for glycated hemoglobin (HbA1c) with ethylenediamine tetra-acetic acid (EDTA); for alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma-glutamyltransferase (GGT), hsCRP, CDT%, TChol, HDL-Chol, LDL-Chol, and TG with clot activator without any more additives or preservatives. Obtained samples were stored without centrifugation and immediately transported at room temperature to the laboratory. For apolipoprotein A-1 (ApoA1), apolipoprotein B (ApoB), and lipoprotein(a) (Lp(a)) determinations whole blood was collected in EDTA collection tubes and centrifuged within 1 hour to obtain plasma, which was subsequently stored and transported on ice to the laboratory. Other laboratory samples, needed for serum separation, were centrifuged and kept at +4 C till assessed. All determinations were performed by following the standard procedures-on the day of collection and within 12 hours. Laboratory methods Markers from blood serum were determined by using different methods as follows. ALT and AST: reaction rate assessment based on the conversion of NADH to nicotinamide adenine dinucleotide (NAD); GGT: kinetic method based on gamma-glutamyl group transference to glycylglycine; Gluc:enzymatic reference method with hexokinase; hsCRP levels: particleenhanced immunoturbidimetric method; TChol: cholesterol oxidase technique; HDL-Chol and LDL-Chol: homogeneous enzymatic colorimetric assay; TG: glycerol phosphate oxidase technique after enzymatic cleavage of fatty acids. All of these determinations were performed by reagents from Roche (Roche Diagnostics, Mannheim, Germany) using Roche Integra 800 analyzer. Blood HbA1c was determined on Roche Integra 400 analyzer using the particle-enhanced immunoturbidimetric method. 
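The anthropometric calculation described above is simple enough to state directly as code; the sketch below is merely illustrative, and the example weight and height values are invented, not taken from the study data.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by the square of height in meters."""
    return weight_kg / height_m ** 2

# Invented example values, not study data.
print(round(bmi(weight_kg=78.0, height_m=1.76), 1))   # ~25.2 kg/m^2
```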
Final results were expressed as percent HbA1c from total Hb according to the Diabetes Control and Complications Trial/National Glycohemoglobin Standardization Program (DCCT/NGSP) protocol. CDT measurement was performed with a latex-enhanced reagent from Dade Behring using a Behring BN analyzer (Siemens Healthcare Diagnostics, Deerfield, USA). Apolipoproteins ApoA1 and ApoB were analyzed by using immunoturbidimetric assays and Lp(a) by using particle-enhanced immunoturbidimetric assays (Roche Diagnostics, Mannheim, Germany). The intensity of the turbidity, proportional to the concentration of antibody-antigen complexes, was measured by the Roche Integra 400 analyzer. Statistical analysis Statistical analyses were performed using MS Excel and the statistical package R. The characteristics of the studied persons were presented as mean ± standard deviation. 95% confidence intervals of percentages were evaluated with the exact binomial test in R. Results The comparison of data based on mean values versus the reference values (RF) would not give an adequate overview because single extreme values would significantly skew the average results. Considering this kind of subject distribution (how they fitted within the RF), percentage evaluation in addition to average values was chosen. During the examination most of the subjects were or seemed to be in a bad physical condition-amputated toes, necrotic ulcers, and other leg and back traumas. Still, considering the aim of the study (the scheme was not to analyze their physical state and medical condition), the true condition stayed unknown. Data in brackets are represented as follows: mean ± SD, range. For indices where reference values differ between male (M) and female (F) study participants, the mean ± SD and range values are given separately. There were no subjects who had all measured markers in the endemic norm. Discussion On the basis of a large body of evidence, TChol, LDL-Chol, HDL-Chol, and TG are automatically used in worldwide clinical practice as the first selection to predict the risk of myocardial infarction (MI) and also to estimate CVD treatment effectiveness. Interestingly, our study consisting of persons who live unsalubrious lives showed no significant differences concerning average values of TChol, HDL-Chol, LDL-Chol, and TG compared to reference (endemic norm) values. Also, the percentage distribution demonstrated that significantly more than half of persons studied had these markers in the normal range (Table I). Next, two ratios (TChol/HDL-Chol and LDL-Chol/HDL-Chol) are used worldwide for acute MI risk estimation in all ethnic groups, in both sexes and all ages (19). In our study group these ratios were also within the normal range (Table I). ApoB is one of the principal proteins of the atherogenic lipoprotein particles (LDL, intermediate-density lipoprotein (IDL), very-low-density lipoprotein (VLDL)), and ApoA1 is a major protein component of HDL. A number of clinical studies demonstrate the association between low ApoA1 levels and an increased risk of MI and coronary artery disease. Also, an increased ApoB/ApoA1 ratio has a role in clinical CVD development, including subclinical atherosclerosis (20). In addition, according to a large comprehensive study (INTERHEART study) the ApoB/ApoA1 ratio was superior for the estimation of the risk of acute myocardial infarction in all ethnic groups, in both sexes and all ages (19). Our study results showed a contradiction-no differences between average values of ApoA1 and ApoB compared to reference values were found.
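The exact (Clopper-Pearson) binomial confidence interval used for the percentage estimates can be reproduced with standard statistical libraries; the paper used R, but an equivalent Python sketch is shown below. The count of 30 out of 51 subjects is an invented example, not a figure from Table I.

```python
from statsmodels.stats.proportion import proportion_confint

# Invented example: 30 of the 51 subjects falling within a reference range.
count, nobs = 30, 51
low, high = proportion_confint(count, nobs, alpha=0.05, method="beta")  # "beta" = Clopper-Pearson exact interval
print(f"proportion = {count / nobs:.1%}, 95% CI = ({low:.1%}, {high:.1%})")
```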
The percentage distribution demonstrated that 84.3% of persons had a normal ApoA1 value, and 56.9% had a normal ApoB value. The values related to TChol and its fractions have been shown to be significantly influenced by lifestyle (physical activity, diet, etc.) and, through this, related to a higher risk of CVD development (21,22). Our study subjects, who were physically inactive and on unhealthy diets, had predominantly normal average TChol and HDL-Chol values. Over half of the subjects had normal LDL-Chol levels. There is evidence that elevated levels of TG are associated with increased risk of MI and ischemic heart disease (23). Also, it is known that systematic aerobic training-load reduces TG levels (24). In our study only six persons (11.8%) had elevated levels of TG. This phenomenon cannot be explained by systematic activity because the persons under investigation were in very bad physical condition. At the same time, no relationships have been found between TG and chronic alcohol consumption (25). The effect of physical activity in our study is contradictory because, on the one hand, it is relatively necessary to move (walk) a lot to look for food from the trash bins, and on the other hand, the subjects' very bad muscular and skeletal condition (amputated toes, necrotic ulcers, complications after bone fractures, etc.) does not allow an intensive physical load. One explanation for this relatively low/normal TG level can be low-fat food consumption (which has not been investigated in this study). CVD predictive markers (TChol, LDL-Chol, HDL-Chol, and TG) used in everyday clinical practice did not raise alarming concerns among homeless people (more than half of the subjects had values within the normal range, as mentioned in Table I). It seems that some other risk factors (ApoB and ApoA1) also have a certain limitation in their application. What kind of risk markers/aspects should be taken into account when this kind of population is under investigation? It seems that inflammation-related markers are more informative. Among the subjects, the mean hsCRP value was twice as high as the reference value (<5 mg/L), but percentage evaluation showed that 58.8% of investigated persons had normal values. This kind of distribution was influenced by two persons whose values were above 100 mg/L. This kind of hsCRP elevation is obviously caused by chronic inflammations from which subjects suffer without getting any cure or medication (leg wounds, pneumonia, dermatitis, etc.). The risk of cardiovascular morbidity and mortality is greatly affected by cigarette smoking (26), and in our study 94% of participants were smokers. Smoking is claimed to be associated with a lower socioeconomic status (27), and the prevalence of nicotine dependence among alcohol or other substance abusers is extremely high (28) and is well documented among homeless people (29)(30)(31). The measurement of CDT is the only test approved by the Food and Drug Administration (FDA) for determining heavy alcohol use (32). Our results for CDT% showed higher mean values of hepatic enzymes compared to reference values. Surprisingly, 45.1% of investigated persons had normal values of CDT. In the scientific literature, a faster resting pulse rate has been shown to be associated with a higher risk of developing hypertension and a greater incidence of cardiovascular morbidity and mortality (26,33). In our study 68.6% of investigated patients had elevated pulse rates.
This can be explained by sympathetic over-activity, which is related to the following circumstances: smoking, alcohol intoxication, physical inactivity, depression, and poor self-rated health (34). Considering that in the shelter there was no systematic eating and no opportunity to cook, the BMI of the investigated persons was normal compared to the reference value. Only one person (c. 2% of the research population) was underweight, and waist circumference indices were all at the normal range or above. It can be concluded from this that the investigated group of homeless people did not suffer from starvation. An explanation can be found in recent research, which brought forth the interconnection between long-term alcohol consumption and heightened BMI (35). Unfortunately, accurate nutrition research is methodically difficult to conduct, and it was also not the aim of the given research. Persistent elevations in HbA1c level increase the risk for the long-term vascular complications of diabetes such as coronary disease, heart attack, stroke, heart failure, kidney failure, blindness, erectile dysfunction, neuropathy (loss of sensation, especially in the feet), gangrene, and gastroparesis (slowed emptying of the stomach). Elevated levels of HbA1c can also be associated with disorders of glucose metabolism, but among our investigated persons most levels (88.2%) were within the normal range. Of course, our study had some limitations. Because experimentally it is unethical to recruit people to live under unhealthy conditions, the selection of the study contingent was made considering the objective of the study-to examine CVD markers in a social group living unsalubrious lives. Homeless people are not an ideal group of choice for the study, because credible information on their prior life history (results of medical examination and medical records, etc.) is missing, and some results of the questionnaire were contradictory. Based on differences in living conditions and social status profiles between countries (climate, donations/foundations for the homeless), our participants do not constitute a representative sample of homeless people. Also, they were in a bad physical condition, which raises questions about the situation elsewhere. Based on that, generalizations cannot be made. An ideal structure of the research would also be to assess the 'amount' of the unhealthy life-style choices, i.e. the quantity of alcohol and drugs consumed, smoking, physical capacity, etc. In this study, only objective results of the physical and laboratory analysis were used. Originally it was planned to include a control group, but to find people whose life-style was 100% objectively proven and correspondent to that of homeless people was difficult (not controllable). Hence, the outcome was compared with reference indices (analogously used in daily clinical practice). The confidence interval gives us the opportunity to generalize the research results to the whole homeless contingent. Conclusions Our study demonstrates that traditional markers used for prediction, diagnosis, and treatment of CVD may not demonstrate the same information pattern among people who live unsalubrious lives-homeless people.
The fact is that CVD is still the leading cause of death among homeless people (9,10), and this kind of life-style cannot be understood to be healthy for the cardiovascular system, but it certainly throws light on the complexity and multifactorial etiology of CVD development mechanisms and may demonstrate the weak points of widely used diagnostic markers. Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.
2014-10-01T00:00:00.000Z
2011-06-29T00:00:00.000
{ "year": 2011, "sha1": "696e200bd8ce928269505fbdb06efa5847aba140", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3109/03009734.2011.586737", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "696e200bd8ce928269505fbdb06efa5847aba140", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
261513878
pes2o/s2orc
v3-fos-license
Semantic-Aware Representation Blending for Multi-Label Image Recognition with Partial Labels Training the multi-label image recognition models with partial labels, in which merely some labels are known while others are unknown for each image, is a considerably challenging and practical task. To address this task, current algorithms mainly depend on pre-training classification or similarity models to generate pseudo labels for the unknown labels. However, these algorithms depend on sufficient multi-label annotations to train the models, leading to poor performance especially with low known label proportion. In this work, we propose to blend category-specific representation across different images to transfer information of known labels to complement unknown labels, which can get rid of pre-training models and thus does not depend on sufficient annotations. To this end, we design a unified semantic-aware representation blending (SARB) framework that exploits instance-level and prototype-level semantic representation to complement unknown labels by two complementary modules: 1) an instance-level representation blending (ILRB) module blends the representations of the known labels in an image to the representations of the unknown labels in another image to complement these unknown labels. 2) a prototype-level representation blending (PLRB) module learns more stable representation prototypes for each category and blends the representation of unknown labels with the prototypes of corresponding labels to complement these labels. Extensive experiments on the MS-COCO, Visual Genome, Pascal VOC 2007 datasets show that the proposed SARB framework obtains superior performance over current leading competitors on all known label proportion settings, i.e., with the mAP improvement of 4.6%, 4.%, 2.2% on these three datasets when the known label proportion is 10%. Codes are available at https://github.com/HCPLab-SYSU/HCP-MLR-PL. Introduction Multi-label image recognition (MLR) (Chen et al. 2019d,b;Wu et al. 2020), which aims to find out all semantic labels from the input image, is a more challenging and practical task compared with the single-label counterpart. Due to the complexity of the input images and output label spaces, collecting a large-scale dataset with complete multi-label annotation is extremely time-consuming. To deal with this is- sue, recent works tend to study the task of multi-label image recognition with partial labels (MLR-PL), in which merely a few positive and negative labels are provided whereas other labels are unknown (see Figure 1). MLR-PL is more practical to real-world scenarios because it does not require complete multi-label annotations for each image. Previous works (Sun et al. 2017;Joulin et al. 2016) simply ignore the unknown labels or treat them as negative, and they adopt traditional MLR algorithms to address this task. However, it may lead to poor performance because it either loses some annotations or even incurs some incorrect labels. More recent works (Durand, Mehrasa, and Mori 2019;Huynh and Elhamifar 2020) propose to train classification or similarity models with given labels, and use these models to generate pseudo labels for the unknown labels. Despite achieving impressive progress, these algorithms depend on sufficient multi-label annotation for model training, and they suffer from obvious performance drop if decreasing the known label proportion to a small level. Fortunately, a specific label c that is unknown in one image I n may be known in another image I m . 
We can extract the information of label c from image I m , blend this information to image I n , and in this way complement the unknown label c for image I n . Previous works (Zhang et al. 2017) utilize mixup algorithm to blend two images and generate a new image with semantic information from both images to help regularize training single-label recognition models. However, a multi-label image generally has multiple semantic objects scattering over the whole image, and simply blending two images lead to confusing semantic information. In this work, we design a unified semantic-aware representation blending (SARB) framework that learns and blends category-specific feature representation to complement the unknown labels. This framework does not depend on pre-trained models, and thus it can perform consistently well on all known label proportion settings. Specifically, we first introduce a category-specific representation learning (CSRL) module (Chen et al. 2019b;Ye et al. 2020) that incorporates category semantics to guide generating category-specific representations. An instancelevel representation blending (ILRB) module is designed to blend the representations of the known label c in one image I m to the representations of the corresponding unknown label c in another image I n . In this way, image I n can also contain the information of label c and thus this label is complemented. This module can generate diverse blended representations to facilitate the performance but these diverse representations may also lead to unstable training. To solve this problem, a prototype-level representation blending (PLRB) module is further proposed to learn more robust representation prototypes for each category and blend the representation of unknown labels with the prototypes of the corresponding categories. In this way, we can simultaneously generate diverse and stable blended representations to complement the unknown labels and thus facilitate the MLR-PL task. The contributions of this work are summarized into three folds: 1) We propose a semantic-aware representation blending (SARB) framework to complement unknown labels. It does not depend on pre-trained models and performs consistently well on all known label proportion settings. 2) We design the instance-level and prototype-level representation blending modules that generate diverse and stable blended feature representation to complement unknown labels. 3) We conduct extensive experiments on several large-scale MLR datasets, including Microsoft COCO (Lin et al. 2014), Visual Genome (Krishna et al. 2016) and Pascal VOC 2007(Everingham et al. 2010, to demonstrate the effectiveness of the proposed framework. We also conduct ablative studies to analyze the actual contribution of each module for profound understanding. Related Work MLR with Complete/Partial Labels. Multi-label image recognition receives increasing attention in the computer vision community due to its wide application to scene recognition (Chen et al. 2019a;Zhang et al. 2020;Liu, Wu, and Lin 2015), human attribute recognition (Guo et al. 2019;Zhu et al. 2017;Chen et al. 2021b), etc. Previous works depend on object localization technology (Wei et al. 2016) or visual attention mechanism (Wang et al. 2017;Chen et al. 2018b) to discover discriminative regions and enhance feature representation to facilitate classification. Considering the guidance of semantics to visual representation learning (Chen et al. 
2021a), recent works further introduce category semantics to help learn category-specific discriminative regions (Chen et al. 2019b;Wu et al. 2020), e.g., the Semantic Decoupling (SD) module (Chen et al. 2019b;Wu et al. 2020), the Semantic Attention Module (SAM) (Ye et al. 2020) and Class Activation Maps (CAM) (Gao and Zhou 2021). On the other hand, label correlations exist commonly among different categories and these correlations are also important for multi-label recognition. Recent works resort to graph neural networks (Abadal et al. 2022;Chen et al. 2020a) to explicitly model these correlations and learn contextualized feature representation to facilitate multi-label recognition (Chen et al. 2019d,b;Wu et al. 2020;Ye et al. 2020;Chen et al. 2020b). Training traditional multi-label image recognition models depends on large-scale datasets with complete annotations per image. To reduce the annotation cost, the current effort (Durand, Mehrasa, and Mori 2019;Huynh and Elhamifar 2020) is dedicated to the MLR-PL task, in which merely a few labels are known while the others are unknown for each image. Earlier works (Sun et al. 2017;Joulin et al. 2016) formulate MLR as multiple binary classifications, and simply ignore missing labels or treat missing labels as negative. Then, they train traditional multi-label models for this task, which leads to poor performance because they lose some data or even incur noisy labels. Inversely, more recent works tend to generate pseudo labels. For example, Durand et al. (Durand, Mehrasa, and Mori 2019) pre-train classification models with the given annotations and generate pseudo labels for the unknown labels based on the trained models. Then, they use both the given and updated labels to re-train the models. Huynh et al. (Huynh and Elhamifar 2020) propose to learn image-level similarity models to generate pseudo labels and progressively re-train the model similarly. However, these algorithms rely on sufficient multi-label annotations for model training, leading to poor performance when the known label proportions decrease to a low level. Different from all these algorithms, our SARB framework learns and blends category-specific feature representation across different images to complement the unknown labels. It gets rid of pre-training models and can obtain consistently good performance on all known label settings. Blending Regularization. Mixup (Zhang et al. 2017;Yun et al. 2019;Kim, Choo, and Song 2020) was recently proposed to blend two input images so as to generate more diverse samples to regularize training. As a pioneer work, Zhang et al. (Zhang et al. 2017) directly perform pixel-wise blending between two images, and it obtains quite an impressive improvement for single-label image recognition. Cutmix (Yun et al. 2019) further proposes to randomly cut one region from an image and paste it to another image to generate new samples. Despite achieving impressive performance, these algorithms are very difficult to apply to multi-label recognition scenarios, because a multi-label image inherently possesses multiple semantic objects scattered over the whole image, and simply blending two images may generate disturbed and confusing information. Different from the mixup algorithm, the SARB framework proposes to learn and blend category-specific representation, in which the blending is performed between two representation vectors that belong to the same category.
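For reference, the sketch below shows the standard mixup formulation from Zhang et al. (2017) that the paragraph above contrasts with: whole inputs and label vectors are blended position-wise with a single coefficient sampled from a Beta distribution. It is a generic illustration with toy data, not code from the SARB paper.

```python
import numpy as np

def mixup(x1, y1, x2, y2, beta_param=1.0, rng=np.random.default_rng(0)):
    """Standard mixup: blend two samples and their label vectors with one shared coefficient."""
    lam = rng.beta(beta_param, beta_param)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Toy example: two 4x4 "images" with 3-class one-hot labels.
x1, x2 = np.ones((4, 4)), np.zeros((4, 4))
y1, y2 = np.array([1., 0., 0.]), np.array([0., 1., 0.])
x_mix, y_mix = mixup(x1, y1, x2, y2)
print(x_mix.mean(), y_mix)
```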
In this way, we can utilize the semantic representation of known labels to complement the representation of the unknown labels, and thus to complement these unknown labels. Figure 2: An overall illustration of the proposed semantic-aware representation blending (SARB) framework. It consists of the ILRB and PLRB modules that perform instance-level and prototype-level representation blending to complement unknown labels. The classifier φ is shared. Semantic-aware Representation Blending Overview In this section, we introduce the proposed SARB framework, which consists of two complementary modules that perform instance-level and prototype-level representation blending to complement unknown labels, i.e., the ILRB and PLRB modules. The ILRB module blends the semantic representations of known labels in one image into the representations of the unknown labels in another image to complement these unknown labels. Meanwhile, the PLRB module learns representation prototypes for each category and blends the representation of the unknown labels of the training image with the corresponding prototypes to complement these unknown labels. Finally, both the ground truth and complemented labels are used to train the multi-label models. Figure 2 illustrates the overall pipeline of the proposed framework. Given a training image I_n, we utilize a backbone network to extract the global feature maps f_n, and then introduce a category-specific representation learning (CSRL) module that incorporates category semantics to generate category-specific representations F^n = [f^n_1, f^n_2, ..., f^n_C], where C is the category number. There are different algorithms to implement the CSRL module, including the semantic decoupling proposed in (Chen et al. 2019b) and the semantic attention mechanism proposed in (Ye et al. 2020). Then we follow previous work (Chen et al. 2019b,c, 2018a, 2021b) to use a gated neural network and a linear classifier followed by a sigmoid function to compute the probability score vector s^n = {s^n_1, s^n_2, ..., s^n_C}. Based on the learned category-specific semantic representation, the ILRB and PLRB modules are used to complement the feature representation of the unknown labels. We introduce these two modules in the following. Instance-Level Representation Blending Intuitively, an unknown label c in image I_n may be known in another image I_m. The ILRB module aims to blend the information of label c in image I_m into image I_n, and thus image I_n can also have the known label c. To achieve this end, we blend the representations that belong to the same category and come from different images to transfer the known labels of one image to the unknown labels of the other image. Formally, given two training images I_n and I_m, whose learned semantic representation vectors are [f^n_1, f^n_2, ..., f^n_C] and [f^m_1, f^m_2, ..., f^m_C], and whose label vectors are y^n = {y^n_1, y^n_2, ..., y^n_C} and y^m = {y^m_1, y^m_2, ..., y^m_C}, we blend the semantic representations and labels for each category. For category c, if label c is unknown in image I_n and known in image I_m, the blending process can be formulated as f̂^n_c = α f^n_c + (1 − α) f^m_c with ŷ^n_c = y^m_c, where α is the learnable parameter and its initial value is set to 0.5. We repeat the above blending process for all categories, and reformulate it as matrix operations for efficient computing, where F^n and F^m are the feature matrices for all categories of images n and m, and F̂^n and ŷ^n are the blended semantic representation and label matrix. Then, we use a gated graph neural network and a linear classifier followed by a sigmoid function to compute the probability score vector ŝ^n.
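A minimal NumPy sketch of the instance-level blending described above is given below. It assumes the simplest reading of the module (blend the two category-specific vectors with a scalar α and copy the donor image's positive label) and uses random features and an invented 1/−1/0 label convention for known-positive/known-negative/unknown purely for illustration; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
C, dim = 5, 8                     # number of categories, feature dimension
alpha = 0.5                       # in the paper this is a learnable parameter initialised to 0.5

F_n = rng.normal(size=(C, dim))   # category-specific representations of image I_n
F_m = rng.normal(size=(C, dim))   # category-specific representations of image I_m
y_n = np.array([1, 0, 0, -1, 0])  # 1 = known positive, -1 = known negative, 0 = unknown
y_m = np.array([0, 1, 1, 0, -1])

F_blend, y_blend = F_n.copy(), y_n.copy()
for c in range(C):
    # Complement label c in I_n only if it is unknown there but known positive in I_m.
    if y_n[c] == 0 and y_m[c] == 1:
        F_blend[c] = alpha * F_n[c] + (1 - alpha) * F_m[c]
        y_blend[c] = 1

print(y_n, "->", y_blend)
```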
Prototype-Level Representation Blending Although the ILRB module can obviously improve the performance, it may disturb the training process because it generates many diverse blended representations for training, especially when the known label proportion is low. To deal with this issue, we further design a PLRB module that learns to generate more stable representation prototypes for each category and blends the representation of unknown labels in image I_n with the prototypes of the corresponding categories. The prototypes are used to describe the overall representation of the corresponding category. For each category c, we first select all the images that have the known label c, and then extract the representations of this category, resulting in the feature vectors [f^1_c, f^2_c, ..., f^{N_c}_c]. Then, we simply use the K-means algorithm to cluster these feature vectors into K prototypes, i.e., P_c = [p^1_c, p^2_c, ..., p^K_c]. It is expected that the representations of the same category are similar, so that the model can learn a more compact distribution and better compute the prototypes for each category. To achieve this end, we utilize a contrastive loss that increases the similarity between f^n_c and f^m_c if images n and m share the existing category c, and decreases the similarity otherwise; here cosine(·, ·) represents a function that computes the cosine similarity between its inputs, and the final contrastive loss L_cst aggregates this pairwise term over the training pairs. Given an input image I_n whose learned semantic representation vectors are [f^n_1, f^n_2, ..., f^n_C] and whose corresponding label vector is y^n = {y^n_1, y^n_2, ..., y^n_C}, we randomly select a label c that is unknown, then randomly select a prototype from P_c and blend it with the representation of label c, formulated as f̃^n_c = β f^n_c + (1 − β) p^k_c with ỹ^n_c marked as known, where β is also a learnable parameter, and it is initialized as 0.5; random() represents a random sampling function, which means we randomly choose one unknown category per image for which to blend the semantic representation; k is randomly sampled in [1, ..., K] and obeys a uniform distribution. We repeat the above blending process for all categories, and reformulate it as matrix operations for efficient computing, where B = [β_1, β_2, ..., β_C] is a parameter vector; F^n = [f^n_1, f^n_2, ..., f^n_C] and P^k = [p^k_1, p^k_2, ..., p^k_C] are the feature matrices for all categories of image n and prototype k; F̃^n = [f̃^n_1, f̃^n_2, ..., f̃^n_C] and ỹ^n = [ỹ^n_1, ỹ^n_2, ..., ỹ^n_C] are the blended semantic representation and label matrix. Then, we use a gated graph neural network and a linear classifier followed by the sigmoid function to compute the probability score vector s̃^n. Optimization Following previous works, we utilize the partial binary cross entropy loss as the objective function for supervising the network. In particular, given the predicted probability score vector s^n = {s^n_1, s^n_2, ..., s^n_C} and the ground truth of the known labels, the objective function can be defined as ℓ(y^n, s^n) = (1/C) Σ_c [1(y^n_c = 1) log(s^n_c) + 1(y^n_c = −1) log(1 − s^n_c)], where 1[·] is an indicator function whose value is 1 if the argument is positive and is 0 otherwise. Similarly, we adopt the partial binary cross entropy loss as the objective function for supervising the ILRB module and the PLRB module, i.e., ℓ(ŷ^n, ŝ^n) and ℓ(ỹ^n, s̃^n). Therefore, the final classification loss is defined by summing the three losses over all samples, formulated as L_cls = Σ_n [ℓ(y^n, s^n) + ℓ(ŷ^n, ŝ^n) + ℓ(ỹ^n, s̃^n)].
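The sketch below illustrates the prototype-level blending in the same spirit: K-means prototypes are computed per category from the representations of images known to contain that category, and one unknown label is complemented by blending with a randomly chosen prototype. It is a simplified, hedged illustration with random data, not the authors' code, and scikit-learn's KMeans stands in for whatever clustering implementation the paper used.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
dim, K = 8, 3
beta = 0.5                                     # learnable in the paper, fixed here for illustration

# Representations of category c taken from N_c images known to contain c (random stand-ins).
feats_c = rng.normal(size=(40, dim))
prototypes_c = KMeans(n_clusters=K, n_init=10, random_state=0).fit(feats_c).cluster_centers_

# One training image: its category-c representation and an unknown label for c.
f_n_c, y_n_c = rng.normal(size=dim), 0
if y_n_c == 0:                                 # only unknown labels are complemented
    p = prototypes_c[rng.integers(K)]          # prototype sampled uniformly from the K clusters
    f_blend_c = beta * f_n_c + (1 - beta) * p  # prototype-level blending
    y_blend_c = 1                              # the complemented label is treated as positive
print(f_blend_c[:3], y_blend_c)
```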
Finally, we sum over the classification and contrastive losses of all samples to obtain the final loss, formulated as L = L_cls + λ L_cst. Here, λ is a balance parameter that ensures the contrastive loss L_cst has a comparable magnitude with the classification loss L_cls. Since L_cst is much larger than L_cls, we set λ to 0.05 in the experiments. Experiments Experimental Setting Implementation Details For fair comparison, we follow previous work to adopt ResNet-101 (He et al. 2016) as the backbone to extract global feature maps. We initialize its parameters with those pre-trained on the ImageNet (Deng et al. 2009) dataset while initializing the parameters of all newly-added layers randomly. We fix the parameters of the first 91 layers of ResNet-101, and train the other layers in an end-to-end manner. During training, we use the Adam algorithm (Kingma and Ba 2015) with a batch size of 16, momentums of 0.999 and 0.9, and a weight decay of 5 × 10^−4. We set the initial learning rate to 10^−5 and divide it by 10 after every 10 epochs. The model is trained for 20 epochs in total. For data augmentation, the input image is resized to 512×512, and we randomly choose a number from {512, 448, 384, 320, 256} as the width and height to crop a patch. Finally, the cropped patch is further resized to 448×448. Besides, random horizontal flipping is also used. To stabilize the training process, we start to use the ILRB and PLRB modules at epoch 5, and re-compute the prototypes of each category every 5 epochs. During inference, the ILRB and PLRB modules are removed, and the image is resized to 448×448 for evaluation. Since all the datasets have complete labels, we follow the setting of previous works (Durand, Mehrasa, and Mori 2019;Huynh and Elhamifar 2020) to randomly drop a certain proportion of positive and negative labels to create partially annotated datasets. In this work, the proportions of dropped labels vary from 90% to 10%, resulting in known label proportions of 10% to 90%. Evaluation Metric For a fair comparison, we adopt the mean average precision (mAP) over all categories for evaluation under different proportions of known labels. We also compute the average mAP over all proportions for a more comprehensive evaluation. Moreover, we follow most previous MLR works (Chen et al. 2019b) to adopt the overall and per-class precision, recall, and F1-measure (i.e., OP, OR, OF1, CP, CR, and CF1) for more comprehensive evaluation. We present the formulas of these metrics and detailed results in the supplementary material due to the page limit. Comparison with the State-of-the-art Algorithms To evaluate the effectiveness of the proposed SARB framework, we compare it with both conventional MLR and current MLR-PL algorithms: 1) Conventional MLR Algorithms: semantic-specific graph representation learning (SSGRL) (Chen et al. 2019b), multi-label image recognition graph convolution network (GCN-ML) (Chen et al. 2019d), knowledge-guided graph routing (KGGR) (Chen et al. 2020b). Through exploring label dependencies or capturing semantic information, these methods achieve state-of-the-art performance on the traditional MLR task. For fair comparisons, we adapt these methods to address the MLR-PL task by replacing the BCE loss with the partial BCE loss. 2) Current MLR-PL Algorithms: partial binary cross entropy loss (partial-BCE) (Durand, Mehrasa, and Mori 2019), Curriculum Labeling (Durand, Mehrasa, and Mori 2019).
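The partial binary cross-entropy used throughout the experiments only scores categories whose labels are known. A compact NumPy sketch of that masking behaviour is shown below; it assumes the same invented 1/−1/0 label convention as the earlier sketches and normalises over the known labels, which is an illustrative choice rather than a claim about the paper's exact normalisation.

```python
import numpy as np

def partial_bce(y, s, eps=1e-12):
    """Mean BCE over known labels only: y in {1, -1, 0}, where 0 marks an unknown label."""
    pos = (y == 1).astype(float)
    neg = (y == -1).astype(float)
    loss = -(pos * np.log(s + eps) + neg * np.log(1 - s + eps))
    known = pos + neg
    return loss.sum() / max(known.sum(), 1.0)

y = np.array([1, -1, 0, 0, 1])           # two positives, one negative, two unknown labels
s = np.array([0.9, 0.2, 0.5, 0.7, 0.6])  # predicted probabilities after the sigmoid
print(round(partial_bce(y, s), 4))
```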
It is worth noting that partial-BCE not only is easy to implement but also achieves state-of-the-art performance on the MLR-PL task. Performance on MS-COCO We first present the performance comparisons on MS-COCO in Table 1 and Figure 3(a). Our SARB framework obtains the overall best performance over current state-of-the-art algorithms. As shown in Table 1, it achieves the average mAP, OF1, and CF1 of 77.9%, 76.5%, and 72.2%, outperforming the previous bestperforming KGGR algorithm by 2.3%, 2.8%, and 2.5%, respectively. As shown in Figure 3(a), the SARB framework also achieves better mAP over all known label proportion settings. It is noteworthy that the SARB framework obtains more obvious performance improvement when decreasing the known label proportions. For example, the mAP improvements over the previous best KGGR algorithm are 1.4% and 4.6% when using 90% and 10% known labels, respectively. These comparisons demonstrate that the SARB framework can be adapted to different proportion settings as it does not depend on pre-trained models. Performance on VG-200 As previously discussed, VG-200 is a more challenging benchmark that covers much more categories. Thus, current works achieve quite poor performance. As shown in Table 1, the previous best-performing KGGR algorithm obtains the average mAP, OF1, and CF1 of 41.5%, 41.2%,and 33.6%. In this scenario, our SARB framework exhibits much more obvious performance improvement. Its average mAP, OF1, and CF1 are 45.6%, 45.0%, and 37.4%, outperforming the KGGR algorithm by 4.1%, 3.8%, and 3.8%. We also present the mAP comparisons over different known proportion settings in Figure 3(b). Compared with current algorithms, we find that our framework achieves the mAP improvement of more than 3.3% on all known label proportion settings. Performance on Pascal VOC 2007 Pascal VOC 2007 is the most widely used dataset for evaluating multi-label image recognition. Here, we also present the performance comparisons on this dataset in Table 1 and Figure 3(c). As this dataset covers merely 20 categories, it is a much simpler dataset and current algorithms can also achieve quite well performance. However, our SARB framework can still achieve consistent improvement. As shown, it improves the average mAP, OF1, and CF1 by 0.7%, 0.5%, and 1.1%. In addition, it exhibits a similar phenomenon that the mAP improvement is more obvious when using the fewer known labels, with 0.4% and 2.8% mAP improvement using 90% and 10% known label proportions as shown in Figure 3(c). Figure 3: The mAP of our SARB framework and current state-of-the-art competitors on the settings of known label proportions of 10% to 90% on the MS-COCO (left), and Pascal VOC 2007 (right) Ablative Studies In this section, we conduct ablative studies to analyze the actual contributions of each module in our SARB framework. Analysis of the CSRL Module The CSRL module is used to extract category-specific feature representation and is a basic module of the proposed framework. There are different kinds of algorithms to implement the CSRL module, in which semantic decoupling (SD) (Chen et al. 2019b) and semantic attention mechanism (SAM) (Ye et al. 2020) are two choices that obtain stateof-the-art performance for the traditional MLR task. Here, we conduct an experiment to compare these two algorithms and present the results in Table 2. It shows that using the two algorithms obtain comparable performance. 
More concretely, using SD obtains slightly better performance than using SAM, with an average mAP improvement of 0.3%, 0.2%, and 0.1% on the three datasets. Thus, we use SD to implement the CSRL module for all other experiments. Current mixup (Zhang et al. 2017) simply performs position-wise blending to generate new samples to regularize training. In this part, we further construct two baseline algorithms that perform position-wise blending in image space and feature space (namely IP-Mixup and FM-Mixup) to verify the benefit of learning category-specific feature representation. As shown in Table 2, both baseline algorithms achieve performance comparable to the SSGRL baselines, as such simple blending cannot provide additional information. Compared with SARB using CSRL, IP-Mixup suffers from an average mAP degradation of 3.6%, 5.9%, and 1.0%, while FM-Mixup suffers from an average mAP degradation of 3.8%, 6.0%, and 1.1% on the three datasets, respectively. Contribution of the SARB Module As we use the SD algorithm to implement the CSRL module and a gated neural network for classification, our baseline is equivalent to SSGRL. SARB consists of the instance-level and prototype-level representation blending modules. In the following, we further conduct experiments to analyze these two modules for more in-depth understanding. Analysis of the ILRB Module To analyze the actual contribution of the ILRB module, we conduct experiments that merely use this module (namely, Ours ILRB) and compare it with the SSGRL baseline on the MS-COCO, VG-200, and Pascal VOC 2007 datasets. As shown in Table 2, it obtains an average mAP of 77.3%, 44.9%, and 90.2% on MS-COCO, VG-200, and Pascal VOC 2007, with mAP improvements of 3.2%, 5.2%, and 0.7%, respectively. ILRB contains a crucial parameter α that controls the ratio of instance-level mix-up. However, it is impractical and exhausting to find the best value for different datasets and different settings. In this work, we set α as a learnable parameter to adaptively learn the best value via standard backpropagation. To verify its contribution, we conduct an experiment to compare with the baseline using a fixed α of 0.5. As shown in Table 2, using a fixed value of 0.5 decreases the average mAPs from 77.3%, 44.9%, and 90.2% to 76.9%, 44.5%, and 89.8%, respectively. Analysis of the PLRB Module Similarly, PLRB is another module that plays a key role, and in this part, we also analyze its effectiveness by comparing the performance with and without it. As shown in Table 2, adding the PLRB module to the baseline SSGRL leads to 3.2%, 5.2%, and 0.9% mAP improvements on the MS-COCO, VG-200, and Pascal VOC 2007 datasets. As previously suggested, the PLRB module can help to generate stable blended representations to complement unknown labels, which leads to more stable training. To validate this point, we further visualize the loss of the training process in Figure 4. It can be observed that the loss is choppy without the PLRB module, and adding this module stabilizes the training process. The parameter β is a learnable parameter that is adaptively learned for different datasets and settings. Here, we also conduct experiments to compare with the setting that fixes β to 0.5 on the MS-COCO, VG-200, and Pascal VOC 2007 datasets. As presented in Table 2, it obtains an average mAP of 76.9%, 44.6%, and 90.2% on these three datasets, with a slight degradation of 0.4%, 0.3%, and 0.2%.
Conclusion In this work, we present a new perspective to complement the unknown labels by blending category-specific feature representation to address the MLR-PL task. It does not depend on sufficient annotations and thus can obtain superior performance on all known label proportion settings. Specifically, it consists of an ILRB module that blends instance-level representations of known labels to complement the representations of corresponding unknown labels, and a PLRB module that learns and blends prototype-level representations to complement the representations of corresponding unknown labels. It can simultaneously generate diverse and stable blended representations to complement the unknown labels and thus facilitate the MLR-PL task. Extensive experiments on MS-COCO, VG-200, and Pascal VOC demonstrate its superiority over current algorithms.
2022-03-07T06:47:22.593Z
2022-03-04T00:00:00.000
{ "year": 2022, "sha1": "33bd9c2f77db502ac08ba1327e559b61cc19b5f8", "oa_license": null, "oa_url": "https://ojs.aaai.org/index.php/AAAI/article/download/20105/19864", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6affdac2fe5c569f05406137933acf11e102cd04", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
18862620
pes2o/s2orc
v3-fos-license
A Target Classification Decision Aid A submarine's sonar team is responsible for detecting, localising and classifying targets using information provided by the platform's sensor suite. The information used to make these assessments is typically uncertain and/or incomplete and is likely to require a measure of confidence in its reliability. Moreover, improvements in sensor and communication technology are resulting in increased amounts of on-platform and off-platform information available for evaluation. This proliferation of imprecise information increases the risk of overwhelming the operator. To assist the task of localisation and classification a concept demonstration decision aid (Horizon), based on evidential reasoning, has been developed. Horizon is an information fusion software package for representing and fusing imprecise information about the state of the world, expressed across suitable frames of reference. The Horizon software is currently at prototype stage. INTRODUCTION The combat system is an integral part of the command and control of a naval vessel as it is responsible for the collection, processing and transmission of information. Recent advances in sensor technology and communications have seen a dramatic increase in the amount of data available for processing. This translates to an increase in the amount of information available to be evaluated by command (e.g., target classification by the sonar supervisor). Often this information is uncertain, incomplete and inconclusive and is likely to include a measure of confidence in the reliability of its source (see section 2). In target classification and threat assessment one must use all available information to determine the target's identity and capabilities. To reduce the risk of information overload and assist the sonar supervisor in performing classification in an accurate and timely manner, a concept demonstration decision aid (Horizon) has been developed. Horizon is based on a methodology for representing and reasoning with information from disparate sources, expressed across a number of frames of reference, called evidential reasoning (Lowrance et al. 1991). Horizon provides an environment for the operator to propagate and fuse the initial information to produce a measure of confidence in a target's classification. The initial information can be translated, discounted or fused, using one of three algorithms, in a graphical manner. Horizon is a concept demonstration software package that is also being applied to the mine threat evaluation (Mansell 1996) and air combat post mission analysis (Mansell 1997) domains. This paper discusses the information fusion problem as it applies to target classification and the development of an information fusion decision aid (Horizon). Horizon draws on established evidential reasoning techniques and adds a new fusion algorithm for combining dependent evidence, as well as automated discounting and explanation facilities. The paper also highlights some of the practical issues associated with designing an information fusion system to be used by non-technical domain experts (Navy personnel). TARGET CLASSIFICATION The process of target classification has numerous definitions in navies around the world.
These definitions essentially provide a methodology for deriving a contact description at one of the following classification levels: (1) general classification (e.g., submarine), (2) vessel type (e.g., SSK) and operating condition, (3) nationality and class (e.g., Canadian Oberon class), or (4) particular unit (e.g., Onondaga). The target classification problem should not be considered a finite process; rather it is the derivation of essential contact information from a dynamic environment. The operator's overall task in this process is to participate with command in target detection, localisation, and identification (i.e., target acquisition). This is achieved by the operator interpreting the data presented on his screen(s), and translating that into informative reports to command to permit tactical decisions to be made (Donald, 1996a, b). To assist the operator in target classification a decision aid must be sufficiently flexible to reason at all levels of classification (be it general classification, type, nationality, class, or unit) using the information as it becomes available. The operator will use the decision aid to continuously evaluate the available evidence and report classification information to command. The level of classification in the report depends on the quality of information and current operational conditions. To further complicate the classification process, the operator must deal with information that is derived from an environmentally hostile medium (i.e., the ocean). As an example, sound transmission often occurs in surges and fades, and rarely will all radiated frequencies be detectable at the same time and place. It is the sonar supervisor's task to consider all the available, imprecise sensor and intelligence information and to provide a classification for sonar contacts. That includes information from the following sources: • Raw bearing-only information interpreted by the acoustic (sonar) operators. • Individual tone information provided by processing the radiated sounds in narrow frequency bands (e.g., gear tonals). • Harmonic set information as it relates to machinery on the target platform. • Radar. • Intelligence identifying what one expects to find in the area. Horizon has been developed as a flexible decision aid that can reason with imprecise information from the above sources. Horizon uses a unique combination of existing and novel evidential reasoning techniques to provide a mathematically rigorous formalism for combining bodies of evidence (Dempster-Shafer theory). Being a departure from classical probability theory, E-R uses information that is typically uncertain, incomplete and error-prone. E-R maintains the association between the measure of belief and disjunctions of events rather than forcing probabilities to be distributed across atomic possibilities. E-R is used to assess the effect of all pieces of available evidence on a hypothesis, making use of domain-specific knowledge. A propositional space called the frame of discernment (or frame) is used to define a set of basic statements, exactly one of which may be true at any one time, and a subset of these statements is defined as a propositional statement. For example, a frame, Θ_A, may be used to represent a target's category (this implementation of the target classification domain currently uses 16 frames to describe the environment, as shown in Figure 1).
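A small sketch of the basic evidential-reasoning objects described above — a frame, a body of evidence as masses over subsets, and translation through a compatibility relation to a related frame — is given below. The frames, category names, and mass values are invented for illustration and do not come from the Horizon domain model; the translation rule shown (mapping each source subset to the union of its compatible elements) is the standard one from the evidential-reasoning literature rather than a description of Horizon's internals.

```python
from itertools import chain

# Invented frames: a coarse target-category frame and a finer class frame.
frame_category = {"submarine", "surface"}
frame_class = {"Oberon", "Collins", "frigate"}

# A body of evidence on the category frame: masses over subsets, summing to 1.
boe_category = {
    frozenset({"submarine"}): 0.6,
    frozenset({"submarine", "surface"}): 0.4,   # explicit ignorance
}

# Invented compatibility relation: which class values are compatible with each category value.
compat = {"submarine": {"Oberon", "Collins"}, "surface": {"frigate"}}

def translate(boe, compat):
    """Translate a BOE to the related frame: each subset maps to the union of compatible values."""
    out = {}
    for subset, mass in boe.items():
        image = frozenset(chain.from_iterable(compat[a] for a in subset))
        out[image] = out.get(image, 0.0) + mass
    return out

print(translate(boe_category, compat))
```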
Once frames have been established, basic probability assignments (BPAs) are used to make probabilistic assessments about the confidence in propositional statements relative to the frame. Belief assigned to non-atomic propositional statements explicitly represents the lack of information available to resolve between the propositions, resulting in a distribution appropriate to the granularity of the evidence. The term body of evidence (BOE) is used to describe the unit distribution of BPAs over propositional statements discerned by a frame of reference and in accordance with the information source. E-R provides a complete methodology for information integration, including the collection of information in its native frame of reference, discounting (due to the credibility of the source), translation to a related frame, projection into the future (or past), and fusion with other independent BOEs. Compatibility relations are used to characterise interrelationships between different propositional spaces. This allows reasoning to be carried out on information described at different levels of abstraction or on frames of reference with overlapping attributes. Figure 1 shows all the frames used in the target classification domain, with a link between two frames representing the existence of a compatibility relation. In this domain, a compatibility relation between the classification and diesels frames represents the number of diesel motors known to be available on each platform class. Therefore, evidence about the number of diesel motors observed provides information about the type of platform. E-R uses Dempster's rule of combination (Lowrance et al., 1991) to fuse multiple independent BOEs into a single BOE, emphasising points of agreement and de-emphasising points of disagreement. Dempster's rule is both commutative and associative (i.e., evidence can be combined in any order), providing a consensus of what was disparate opinion. Alternative fusion algorithms have also been proposed to counter perceived weaknesses of Dempster's rule. Horizon has implemented Dempster's rule as well as Smets' algorithm (Smets, 1993), and we are currently evaluating their strengths and weaknesses. Initial results suggest Smets' algorithm may suit this military problem as it provides a conservative weighting of evidence. Further, Smets' method of distributing conflicting evidence to the unknown proposition (which states that the true proposition may not be an element of the frame) suits the dynamic military domain, where weapons and platforms are rapidly evolving. Users may then be trained to recognise that a high value for the unknown proposition may suggest (1) a high level of conflict in the evidence, (2) that signature data for a target is not included in the database, or (3) that the target may be deliberately masking its identity by altering its recognisable signatures. The selection of this method for dealing with uncertainty was not based on competency, as probability theory and fuzzy logic are very capable of representing uncertain information. Instead, E-R was eventually selected over probability theory for its natural representation and manipulation of information contained at different levels of abstraction (Gordon and Shortliffe, 1985; Almond, 1995) and in different frames of reference.
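To make the fusion step concrete, the following is a minimal illustrative sketch of Dempster's rule of combination over a small frame of discernment. It is not Horizon's implementation (Horizon is written in Allegro Common Lisp, whereas this sketch is Python), and the frame, the two bodies of evidence and their belief values are invented for the example only.

FRAME = frozenset({"SSK", "SSN", "surface"})   # hypothetical frame of discernment

def combine_dempster(m1, m2):
    # Fuse two basic probability assignments (dicts mapping frozensets of
    # propositions to mass): intersect focal elements, accumulate the product
    # of their masses, and renormalise by the total non-conflicting mass.
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2          # mass that falls on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting bodies of evidence")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two independent BOEs; mass placed on the whole frame represents ignorance
# rather than being forced onto atomic propositions.
boe_sonar = {frozenset({"SSK", "SSN"}): 0.6, FRAME: 0.4}
boe_intel = {frozenset({"SSK"}): 0.5, frozenset({"surface"}): 0.2, FRAME: 0.3}

for focal, mass in sorted(combine_dempster(boe_sonar, boe_intel).items(), key=lambda kv: -kv[1]):
    print(sorted(focal), round(mass, 3))

Run as written, agreement between the two sources concentrates most of the combined belief on the SSK proposition, while the residual mass on the full frame records what neither source can resolve.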
INDEPENDENCE OF EVIDENCE The concept of independence is controversial in the areas of E-R and probability theory (Dawid, 1979). Often centring on experimental independence or conditional independence, these theories have tended to handle dependence inadequately (Pearl, 1988; Shafer, 1981; Walley, 1991; Kahneman et al., 1982). These criticisms are usually based on the difficulty of acquiring the appropriate evidence values and of applying an independence test to the data. It has been proposed that conditional independence between BOEs is not sufficient to guarantee the validity of Dempster's fusion algorithm (Voorbraak, 1991). However, these papers fail to provide a decisive argument, or an unbiased counterintuitive example, proving that conditional independence combined with screening of BOEs by a domain expert is insufficient for guaranteeing independence of evidence. Hence, when using Horizon one makes two assumptions about the BOEs being fused using Dempster's rule, the first being that the human operator can determine whether BOEs are based on the same observations, and are therefore dependent (e.g., two intelligence reports quoting the same source are not independent). There is a growing number of successful applications of E-R to real-world problems in the literature, including submarine tracking, sonar data interpretation, anti-air threat identification and naval intelligence analysis (Lowrance et al., 1991). THE HORIZON PROGRAM The software package called Horizon (defined in Webster's Dictionary as the fullest range or widest limit of perception, interest, appreciation, knowledge, or experience), currently at the prototype stage, is a domain-independent E-R system that has been applied to the mine threat evaluation domain and is currently being applied to the target classification problem. One challenge in developing this information fusion software package is to make sure the design does not require the user to understand the intricacies of E-R (a goal we are still working towards). However, it is anticipated that a certain amount of understanding (training) is required to distribute evidence in an E-R manner, as well as to ensure the information is independent. Horizon is a decision-aid program that requires a knowledge engineering process to take place before it can be applied to a problem. This involves capturing the domain by first establishing the frames of reference used to represent BOEs, and then generating the compatibility relations between those frames. The amount of knowledge engineering required will depend on the domain under investigation. The target classification domain consists of 16 frames of reference used to describe different attributes of a target. The compatibility relations are constructed with the aid of an expert, and represent the frames of reference in which one expects information to arrive, or may require conclusions to be presented. Knowledge engineering of the target classification domain presented in this paper was a straightforward, albeit laborious, task. The relationships between platform classification and its various components (e.g., number of shafts) were extracted from reference material such as Jane's Fighting Ships (Sharpe, 1995). This domain describes the ships and submarines of a selection of countries in the Pacific theatre, which includes large and small navies such as the USN and Papua New Guinea's Navy respectively. Horizon provides a graphical user interface, called the CR-Editor, for the real-time display and editing of compatibility relations. The CR-Editor has two types of windows.
The Frame Gallery window links frames for which compatibility relations exist (displayed in Figure 1). The CR-windows display the compatibility relations between two frames of reference, with the propositions of each frame lined up on each side. Links between these propositions represent information in one frame of reference that is simultaneously true in the other frame of reference. For example, the Oberon class submarine (from the classification frame) in Horizon is linked to: • Australia and Canada (country frame); • SSK (type frame); • 17 (speed frame, in kts); • 2 (number of diesels frame); • 1 (number of shafts frame). The CR-Editor was used to aid in the knowledge engineering of the target classification domain, and would be used to update the compatibility relations as navies acquire new assets or upgrade current assets. Information collected from the environment (section 2) can be entered into the system in three ways. Firstly, static information (such as a database of platforms) that does not change rapidly can be stored as BOE data files and read into Horizon when the system is initialised. Secondly, dynamic information can be entered automatically into Horizon's database from the combat system (e.g., sensors, signal processing units, expert systems, etc.). Finally, other dynamic information (such as surveillance reports) can be entered directly into the system as it arrives, using the user interface window shown in Figure 2 (windows used to create or edit a new BOE). This requires the user to select the frame of reference, then distribute belief among the listed propositions. The interface window is also used to edit and update all forms of information when required. Horizon represents and manipulates BOEs in an object-oriented manner (see Figure 3). Each BOE is represented by an icon and is stored in its native frame of reference, where it can be selected to be included in a calculation. At present the calculation operations include discount (reduces the confidence in a BOE), translate (move to a new frame), and fuse (combine BOEs using one of three fusion algorithms). Once the user has selected the BOEs to be included in the calculation, the operation is chosen. If discount is selected, the user supplies a discount rate (a percentage between 1 and 100), at which time Horizon produces a secondary BOE (a body of evidence generated through the manipulation of initial BOEs) with a modified belief distribution. If the translate operation is selected, the user is prompted to choose the frame in which the conclusion should be expressed, and Horizon generates the new BOE by performing the minimum number of intermediate translations. The operator may also choose to perform a fusion operation using either Dempster's rule of combination (Shafer, 1976), a least commitment algorithm (Mansell, 1997), or a dependent evidence algorithm (Mansell, 1997). The system will carry out the necessary translations, presenting the resulting BOE in the conclusions window (Figure 4). The display window presents the pooled evidence for and against all non-zero propositional statements, as well as a measure of uncertainty (being the amount of evidence that neither supports nor contradicts that statement). Horizon is written in Allegro Common Lisp, with the user interface written in PC/CLIM, making the package portable to either PC or UNIX machines.
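As an illustration of the translate operation and the compatibility relations just described, the sketch below projects a body of evidence from a classification frame into a diesels frame. The relation and the platform entries are invented placeholders (real values would come from the CR-Editor's knowledge base), and the code is again illustrative Python rather than Horizon's Lisp implementation.

# Hypothetical compatibility relation: platform class -> number of diesel motors.
COMPAT_CLASS_TO_DIESELS = {
    "Oberon":  frozenset({2}),
    "Collins": frozenset({3}),
    "Kilo":    frozenset({2}),
}

def translate(boe, relation):
    # Project each focal element onto the union of the target-frame
    # propositions compatible with its members.
    out = {}
    for focal, mass in boe.items():
        image = frozenset().union(*(relation[x] for x in focal))
        out[image] = out.get(image, 0.0) + mass
    return out

boe_class = {frozenset({"Oberon"}): 0.7, frozenset({"Oberon", "Collins"}): 0.3}
print(translate(boe_class, COMPAT_CLASS_TO_DIESELS))
# -> {frozenset({2}): 0.7, frozenset({2, 3}): 0.3}

Evidence observed in the diesels frame can be pushed the other way with the inverse relation, which is how, in this toy example, observing two diesel motors would lend support to the Oberon and Kilo classes.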
Horizon's "point-and-click" interface has been commended for its ease of use and provision for rapid interpretation of information (a critical feature of naval systems). AUTO-DISCOUNTING Horizon also introduces the concept of auto-discounting as a way to condition each BOE, according to the credibility of its source, prior to fusion. To be consistent with Australian Navy terminology, Horizon uses three levels of confidence in an information source: certain, probable and possible. Horizon requires that each BOE be given a measure of confidence in its source, the default being probable. Each of these measures represents a discount rate for a discount operation. These rates are user-configurable and are currently set at: • Certain: discount rate = 0% (representing complete confidence in the source). (Investigation into the appropriateness of these values is ongoing, but initial results are positive; most of the debate has been whether Certain should have a zero discount rate or a very low value, e.g., 5%.) Auto-discounting is an option that can be selected or de-selected prior to the execution of any operation, and takes place only once, at the beginning of any fusion operation (i.e., prior to translation). The confidence level is set from the Edit-BOE window using a check box with the three options certain, probable, and possible (see Figure 2). The inclusion of an auto-discounting feature in Horizon serves two purposes. First, automatic discounting of evidence by the identity of its source allows us to minimise personal biases contained in BOEs provided by humans (in the author's experience, uniformed personnel as a whole tend to overestimate quantitative measures of belief, while conservatively estimating their confidence in that measure). In addition, a sensor's supervisor unit or Horizon's user may provide this measure of confidence in a BOE based on recent history or the conditions under which the information was acquired. A second reason for introducing the measure of confidence in a source's credibility is to circumvent the Zadeh objection (Zadeh, 1984), in which the fusion of strongly conflicting bodies of evidence under Dempster's rule can concentrate belief on a proposition that every source considered unlikely. With discounting, if one BOE incorrectly overlooks the possibility of a proposition being true, that proposition may still gain significant support from other BOEs after fusion. One drawback of auto-discounting is that the information content of the BOE is lowered in proportion to the measure of confidence (i.e., the associated discount rate). The consequence of this redistribution is that results may become inconclusive. This is particularly true when the initial information quality is bordering on the inconclusive (i.e., one proposition, or set of propositions, is only slightly favoured over the others after a standard Dempster's rule fusion). Under these circumstances auto-discounting is not recommended. To counter this, the operator could be trained to use auto-discounting by default but, when inconclusive results are produced, to redo the fusion without auto-discounting.
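A minimal sketch of the discounting step behind the auto-discount option is given below: a fraction of each focal element's mass, determined by the source's confidence level, is moved onto the whole frame (i.e., onto ignorance). The "probable" and "possible" rates below are placeholders for illustration only; as noted above, only the Certain rate (0%) is stated in the text and the remaining values are configurable and still under evaluation.

FRAME = frozenset({"SSK", "SSN", "surface"})
DISCOUNT_RATES = {"certain": 0.00, "probable": 0.20, "possible": 0.50}   # assumed values

def discount(boe, confidence, frame=FRAME):
    # Shafer-style discounting: m'(A) = (1 - r) * m(A) for every focal
    # element A, with the removed mass r added onto the frame itself.
    rate = DISCOUNT_RATES[confidence]
    out = {}
    for focal, mass in boe.items():
        out[focal] = out.get(focal, 0.0) + (1.0 - rate) * mass
    out[frame] = out.get(frame, 0.0) + rate
    return out

boe = {frozenset({"SSK"}): 0.8, FRAME: 0.2}
print({sorted(k): round(v, 3) for k, v in ()} if False else {tuple(sorted(k)): round(v, 3) for k, v in discount(boe, "probable").items()})
# -> {('SSK',): 0.64, ('SSK', 'SSN', 'surface'): 0.36}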
SENSITIVITY ANALYSIS An essential part of decision aid software is providing the user with some way to review or understand how the conclusion was derived (i.e., an explanation facility). Horizon's sensitivity analysis is based on Dempster's unnormalised rule; the full implementation of this algorithm is reported in Xu and Smets (1996) and has been adapted to apply to Mansell's (1997) dependent evidence fusion rule. Those BOEs most influential on the conclusion are identified by highlighting the links in the main window (Figure 3). A qualitative explanation may also be displayed in the form of a text window that identifies the most and least influential BOEs. In Figure 3 the Eye-Witness BOE had the most impact on the conclusion (signified by a red link to the conclusion BOE). The user may choose to discount or remove Eye-Witness and recalculate the conclusion if his or her confidence in the BOE is in question. DISCUSSION Initial results from the target classification domain indicate that independence of evidence is not a serious concern for most sources. However, this may not be the case when dealing with intelligence reports, as it is often difficult to determine independence of sources (particularly when the basis of the reports is unknown). Horizon includes an algorithm to fuse suspected dependent BOEs, and the utility of this function is under investigation. Horizon has been trialled on a limited set of synthetic data to demonstrate the suitability of evidential reasoning to the target classification problem; Horizon is a technology demonstrator and is therefore not optimised to be fast or efficient. CONCLUSION This paper describes the target classification problem and the Horizon decision aid currently under development. Preliminary examination of this software confirms that evidential reasoning is an appropriate technique for dealing with high-level information fusion. Further, the methodology is sufficiently mature to allow the development of robust decision systems that can accurately fuse and display tactical information.
2013-02-06T07:58:18.000Z
1997-08-01T00:00:00.000
{ "year": 2013, "sha1": "3b0975934ebe2a7ae5fd8caa15b2b00f11288e4c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "3b0975934ebe2a7ae5fd8caa15b2b00f11288e4c", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
205255556
pes2o/s2orc
v3-fos-license
Analysis of NRAS RNA G-quadruplex binding proteins reveals DDX3X as a novel interactor of cellular G-quadruplex containing transcripts Abstract RNA G-quadruplexes (rG4s) are secondary structures in mRNAs known to influence RNA post-transcriptional mechanisms thereby impacting neurodegenerative disease and cancer. A detailed knowledge of rG4–protein interactions is vital to understand rG4 function. Herein, we describe a systematic affinity proteomics approach that identified 80 high-confidence interactors that assemble on the rG4 located in the 5′-untranslated region (UTR) of the NRAS oncogene. Novel rG4 interactors included DDX3X, DDX5, DDX17, GRSF1 and NSUN5. The majority of identified proteins contained a glycine-arginine (GAR) domain and notably GAR-domain mutation in DDX3X and DDX17 abrogated rG4 binding. Identification of DDX3X targets by transcriptome-wide individual-nucleotide resolution UV-crosslinking and affinity enrichment (iCLAE) revealed a striking association with 5′-UTR rG4-containing transcripts which was reduced upon GAR-domain mutation. Our work highlights hitherto unrecognized features of rG4 structure–protein interactions that highlight new roles of rG4 structures in mRNA post-transcriptional control. INTRODUCTION Recognition of mRNA secondary structures by RNA binding proteins (RBPs) is essential for post-transcriptional control to influence mRNA processing, stability, transport and translation (1,2). Watson-Crick hydrogen bonding and non-canonical interactions are important in RNA folding, and four-stranded G-quadruplex (G4) secondary structures are key structural features in mRNA (3,4). G4 structures form from guanine (G)-rich sequences in which stacks of G-quartets are stabilized by a central metal cation (Figure 1A). Recently, high-throughput sequencing combined with reverse transcriptase stalling at stabilized RNA G4s (rG4) has revealed over 13 000 loci where rG4 structures form within the human transcriptome in vitro (5,6). Evidence supporting rG4 formation in cells includes detection of rG4s in the cytoplasm by immunofluorescence using a G4 structure-specific antibody (7,8). Notably, rG4s are enriched in functionally important regions, including 5 -and 3 -untranslated regions (UTRs) (5,6,(9)(10)(11). Several helicases such as DHX36 and DDX21 bind and unwind rG4 structures with pico-or nanomolar affinities (12)(13)(14). Another multifunctional helicase is DHX9 which binds several nucleic acid secondary structures including G4s but with a preference for RNA substrates (14). Thus, cells possess specialized enzymes that recognize and resolve rG4s which may be important for post-transcriptional processes such as mRNA translation, transport or stability. rG4s have been functionally implicated in several neurodegenerative diseases, such as amyotrophic lateral sclerosis (ALS), frontotemporal dementia (FTD) and Fragile X syndrome (FXS) (15,16). The underlying cause of FXS is a CGG-rich repeat expansion in the FMR1 gene that contributes to protein silencing due to rG4-mediated translational inhibition (17). Likewise, ALS is defined by a GGGGCC repeat expansion in C9orf72, which leads to a repeat-length-dependent accumulation of aborted rG4containing transcripts (8). It has been proposed that rG4s have roles in cancer development and progression as several 5 -UTRs of oncogene mRNAs are enriched in rG4s (5,10,11,18,19). 
The presence of a 5′-UTR rG4 hampers cap-dependent translation of several oncogene messages including NRAS and BCL2 in vitro (19)(20)(21). As rG4s frequently occur in mRNAs and have important regulatory roles, comprehensive identification of cytoplasmic rG4-interacting proteins is needed to dissect rG4 function. We have therefore used an unbiased affinity proteomics approach to catalog cytoplasmic interactors of the human NRAS 5′-UTR rG4. This rG4 was selected due to the relevance of NRAS in tumorigenesis (22). Moreover, folding of the NRAS rG4 into a stable parallel intramolecular G4 is well characterized biophysically, and this rG4 has been shown to inhibit translation in vitro (20). Herein, we identify cytoplasmic rG4-interacting proteins that have not previously been demonstrated to interact with an rG4 structure. Notably, over half of the rG4 interactors contained a glycine-arginine-rich (GAR) domain, and we show that this is required for the NRAS rG4-DDX3X interaction. This interaction was recapitulated by transcriptome-wide individual-nucleotide resolution UV-crosslinking and affinity enrichment (iCLAE) in cells. Overall, our work highlights the utility of identifying rG4-interacting proteins to generate mechanistic insights into rG4-mediated post-transcriptional control. [Figure 1 legend, partial: (B) the NRAS rG4 oligonucleotide folded into a G4 structure; middle, a mutated NRAS rG4 (NRAS mG4) unable to form a G4 structure (green indicates Gs mutated to As); bottom, a stem loop (SL; blue indicates hydrogen-bonded stem bases) from the GUS mRNA; right, the location of the rG4 sequence within the 5′-UTR of the human NRAS transcript. (C) Workflow for AEs and liquid chromatography-tandem mass spectrometry (LC-MSMS): HeLa cell cytoplasmic extracts were incubated with biotinylated rG4 or control oligonucleotides coupled to streptavidin beads, and affinity-purified proteins were subjected to LC-MSMS for identification. (D) Control AEs of endogenous DHX36: HeLa cell cytoplasmic extracts were incubated with rG4, mrG4 or SL biotinylated oligonucleotides bound to streptavidin beads, or with beads alone; bound protein fractions (AEs) and flow-through (FT) were subjected to SDS-PAGE and stained for total protein (top) or processed for western blotting with a DHX36 antibody (bottom), with the DHX36 signal in each lane given as a percentage of the signal detected in all lanes. (E) AEs and western blotting for DHX9, DHX29 and DHX30 using antibodies detecting the endogenous helicases, as in (D); input (IP) was 30 µg of cytoplasmic extract, and DHX9 and DHX36 signals in AEs are given as a percentage of the signal detected in all lanes.] Materials Anti-MYC, hemagglutinin-tag, DHX36, DDX5, DDX17, DHX9, DHX29 and DHX30 antibodies were purchased from Abcam, the V5-tag antibody was obtained from Source BioScience, the DDX3X antibody was ordered from Santa Cruz and the FXR1 antibody was purchased from Cell Signaling. RNA oligonucleotides were ordered from Integrated DNA Technologies. Streptavidin magnetic beads were obtained from Promega and Strep-Tactin magnetic nanobeads were purchased from IBA. Transfections, cell lysates, western blot and Wes Simple Western analysis HeLa cells were transfected at a confluency of 1 × 10^6 cells with 5 µg of plasmid using TransIT-LT1 Transfection Reagent (Mirus Bio LLC). After 45 h, cells were lysed using hypotonic lysis buffer with 0.5% sodium deoxycholate, 0.5% Triton X-100 and 2 mM dithiothreitol (DTT), as previously described in (24).
For western blot analysis 30 g of hypotonic extract or 25 l of 50 l affinity enrichments (AEs) in Laemmli sample buffer (Sigma) were loaded 12% sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) gels (ThermoFisher) and proteins were transferred to a nitrocellulose membrane using the iblot2 system (ThermoFisher). Membranes were blocked with Odyssey Blocking Buffer (LI-COR) and incubated with the first antibody followed by a second, fluorophore conjugated antibody (LI-COR). Fluorescent images were established using the Odyssey CLx (LI-COR) and quantification of bands was performed with Fiji image analysis software. Briefly, affinity enriched protein bands were quantified by marking all bands individually with rectangular sections in the gray scale image. The Fiji software then plots peaks representing the intensity of the band in the selected areas. Each peak was expressed as percentage of the total size of all selected bands. Capillary electrophoresis in a Wes Simple Western System (ProteinSimple) was performed as previously described by the manufacturer (Proteinsimple (https://proteinsimple.com)). The Wes immunoassay is a capillary-based system where samples are loaded into the capillary automatically and separated by size as they migrate through a stacking and separation matrix. The separated proteins are then immobilized to the capillary wall via proprietary, photoactivated capture chemistry. Target proteins are identified using a primary antibody and immunoprobed using a horseradish peroxidase (HRP) conjugated secondary antibody and a chemiluminescent substrate. The resulting chemiluminescent signal is detected and quantified. Wes analysis was performed with 1 or 5 g of hypotonic extract or 2.4 l of 50 l AEs in Laemmli sample buffer. Each western blot or 'Wes' Simple Western experiment is a representative of three independent experiments. iCLAE Procedures were performed as previously described (27) with the following alterations. Two 10 cm plates of Flp-In T-REx 293 cells expressing wild-type (WT) or four 10cm plates for RG mutant DDX3X were seeded at a density of 5 × 10 6 and protein production was induced over night with 0.01 g/ml Doxycycline. RBP-RNA interactions were stabilized by UV crosslinking (254 nm, 200 mJ/cm 2 ), followed by lysis in hypotonic lysis buffer. Cytoplasmic lysates were replenished to a final concentration of 50 mM Tris-HCl (pH 7.4) 100 mM NaCl and 0.1% SDS. Subsequently, RNAse/DNAase digestion was performed as previously described (27). AEs utilizing the Strep-tag on DDX3X and RG mutant DDX3X were performed by incubating lysates with Strep-Tactin magnetic beads for 3 h at 4 • C. Reverse transcription (RT) was performed in G4 optimized lithium RT buffer (5). Bioinformatic analysis of AE-LC-MSMS data Significance analysis of NRAS rG4 interactors was performed by using the edgeR package in RStudio (http:// www.rstudio.org/): peptide counts from the AE-LC-MSMS data were fitted with the generalized linear model (glm-Fit) using library size only as normalization, and Pvalues and false discovery rate (FDR) were estimated from the differential signal analysis (glmLRT) by contrasting NRAS rG4 counts to the merged set of negative controls (NRAS mrG4, SL and beads-only). A list of high-confidence interactors was created by selecting only proteins with FDR < 0.05. 
To represent rG4 interactors in a network, high confidence interactors were imported into Cytoscape version 3.5.1 (www.cytoscape.org) (28) and intersected with physical interactions imported from IMEX-complying databases using PSICQUIC Universal Client app. Functional modules within the network were identified with the MCODE 1.4.2 app (http://baderlab. org/Software/MCODE) and gene ontology annotations were added using the ClueGO 2.3.3 app (http://www. ici.upmc.fr/cluego/cluegoDescription.shtml). Term enrichment was performed by right-sided hypergeometic test with a Benjamini-Hochberg corrected P-value. RNA-seq and iCLAE analysis Raw sequencing files for RNA-seq libraries were preprocessed using cutadapt to remove sequencing adapters and low quality sequencing tails (options -q 10). Trimmed files were aligned to the human genome (GRCh37/hg19) using tophat2 and using the UCSC gtf file provided by Illumina iGenomes as an annotation file (http://support.illumina. com/sequencing/sequencing software/igenome.html). Gene counts were calculated using htseq-count and the same gtf file. Differential expression analysis was done using the R package edgeR. Isoform quantification was performed using the cufflinks software (https://github.com/cole-trapnelllab/cufflinks) and the value for the same condition (WT DDX3X, RG-mutant DDX3X and Negative) were averaged. Transcripts with average FPKM of at least 0.1 in any condition were considered as expressed, and those mapping to protein coding or lncRNA from the gencode version 19 annotation (n = 24 590) were then used to assemble the expressed transcriptome fasta file for the following iCLAE analysis. Identical reads of iCLAE-seq libraries were removed and de-multiplexed according to their 4-nt pattern sequence at the 5 -end of each read (e.g. N3-GGTT-N2). Libraries were then pre-processed with cutadapt to remove 3 sequencing adapters and low quality sequencing tails. Highly repetitive reads, i.e. those having at least 10 equal nucleotides (e.g. A(58), T{10,n}, etc.), were removed and aligned to the hg19 version of the human genome using bwa mem (http://bio-bwa.sourceforge.net/). After alignment, reads with mapping quality (MAPQ) < 10 were re-moved and those aligning to the same position while also having the same barcode were eliminated, as they constitute most likely polymerase chain reaction duplicates. Coverage files were calculated (bedtools), regions with signal above 10 read counts extracted and intervals closer than 30 nt were merged into a single peak region. Merged peak regions less than 30 nt in width were removed. Next, only reads aligning with MAPQ ≥ 10 with expressed transcripts (bwa-hg19 aligned bam files) were further mapped to the expressed transcriptome RSEM (https://github.com/ deweylab/RSEM). Coverage transcript files were calculated and normalized for the total estimated count in each iCLAE library, and peaks were called. One hundred base pairs flanking the middle of a peak were considered as binding regions of peaks below 100 nt and sequences from these regions were extracted with bedtools. UTRs and coding sequence (CDS) analyses was performed by considering the same gencode version 19 annotation file as described above (https://www.gencodegenes.org/releases/19.html). Fold enrichment analysis was performed by randomly shuffling of peaks throughout expressed transcripts (bedtools shuffle). To describe the overlap with published datasets fold enrichment was calculated similarly. 
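To illustrate the peak-calling logic described above (a coverage threshold, merging of nearby intervals, and a minimum peak width), a simplified stand-alone sketch is given below. The actual analysis used bedtools on genome-wide coverage files; the function and the toy coverage track here are assumptions for demonstration only.

def call_peaks(coverage, min_reads=10, max_gap=30, min_width=30):
    # coverage: per-nucleotide read counts along one transcript.
    # Keep positions at or above the read threshold, merge covered positions
    # separated by at most max_gap nt, and drop merged regions below min_width.
    covered = [i for i, c in enumerate(coverage) if c >= min_reads]
    peaks, start, prev = [], None, None
    for i in covered:
        if start is None:
            start, prev = i, i
        elif i - prev <= max_gap:
            prev = i
        else:
            peaks.append((start, prev + 1))
            start, prev = i, i
    if start is not None:
        peaks.append((start, prev + 1))
    return [(s, e) for s, e in peaks if e - s >= min_width]

toy_coverage = [0] * 50 + [15] * 40 + [3] * 10 + [12] * 25 + [0] * 50
print(call_peaks(toy_coverage))   # -> [(50, 125)]: the two covered blocks merge into one peak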
G4 motif analysis was performed by considering the following regular expression G 2+ N 1-12 G 2+ N 1-12 G 2+ N 1-12 G 2+, summarized as (G2-L12) 4 . Identification of cytoplasmic NRAS rG4-interacting proteins We applied an unbiased proteomics approach to identify cytosolic proteins that interact with the NRAS 5 -UTR rG4 structure. Biotinylated oligonucleotides containing the rG4 sequence found in the 5 -UTR of NRAS were folded into a rG4 structure (see 'Materials and Methods' section). Folded rG4 and control oligonucleotides ( Figure 1B) were immobilized on streptavidin beads and used as baits for affinity enrichments (AEs) of proteins from HeLa cell cytosolic extracts ( Figure 1C) (25). To critically evaluate specific rG4 interactors, a G-to-A mutated NRAS sequence (mrG4) that is unable to fold in to a G4, a stem-loop-forming sequence (SL), and empty beads (beads i.e. no oligonucleotide) were used in independent AEs ( Figure 1B). The integrity of rG4 formation and the failure to form a G4 structure in the mutated control was confirmed in vitro using circular dichroism spectroscopy (CD) and UV thermal melting analysis (Supplementary Figure S1). Using an anti-DHX36 antibody, we confirmed strong enrichment of DHX36, a wellknown rG4 interactor, in the NRAS rG4 (90%) AEs compared to the mrG4 (3.3%), SL (3.6%) and bead (2.3%) controls, which validated our approach ( Figure 1D). Intriguingly, assessment of selected DHX helicase family members using antibodies recognizing endogenous proteins showed that the NRAS rG4 interacted selectively with DHX9 (94%) compared to controls (mrG4 3.4%, SL 0%, beads 1.5%) but did not enrich for DHX30 or DHX29 ( Figure 1E). These data suggest high specificity and selectivity of our assay. Next, proteins bound to the rG4 oligonucleotides and control samples were subjected to on-bead tryptic digestion followed by LC-MSMS (26). We chose this qualitative approach due to its simplicity, cost effectiveness, time effectiveness and the relatively low requirement for bioinformatic analysis as compared to stable isotope labeling by amino acids in cell culture (SILAC) (29). Two biological replicates showed high reproducibility, as judged by dot plots comparing unique peptide counts (UPCs) from each replicate (r = 0.67 -0.85) (Supplementary Figure S2). Overall, 711 proteins interacted with RNA (rG4, mrG4 or SL but not beads). NRAS rG4-specific interactors were then identified by comparing UPCs of rG4 interactors to controls (CTRs: mrG4, SL and beads) using a linear fit model (Figure 2A and B; Supplementary Table S1). These were then ranked according to their false discovery rate (FDR) (significant FDR < 0.05, column 'FDR.G4 vs CTRs' Supplementary Table S1) which revealed 80 significant rG4 interactors. A more refined list was curated by only considering proteins with 6 or more UPCs, resulting in 35 rG4specific proteins (column 'G4.av ≥ 6' Supplementary Table S2). These strict selection criteria identified novel and previously characterized rG4-interacting proteins, including DHX36 (ranked 5) and DHX9 (ranked 31). Functional relationships within the primary list of 80 NRAS rG4 interactors were determined using Cytoscape, which generates a network of physical relationships (edges) between the interactors (nodes) (28,30) and their probability of interaction with the NRAS rG4 bait (Supplementary Tables S1 and 2; Figure 2C). Known and predicted candidate RBPs (green and orange diamonds, respectively, Figure 2C) were then overlaid on to this network (31,32). 
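Returning to the (G2-L12)4 motif used in the analysis methods above (four runs of at least two guanines, each pair separated by a loop of 1-12 nucleotides), a minimal sketch of such a scan is shown below. The regular expression simply transcribes that description; the test sequences are invented G-rich and control examples, not sequences taken from the datasets.

import re

# Four G-runs of >= 2 Gs, with 1-12 nt loops between consecutive runs: (G2-L12)4.
G4_MOTIF = re.compile(r"G{2,}[ACGT]{1,12}G{2,}[ACGT]{1,12}G{2,}[ACGT]{1,12}G{2,}")

def has_g4_motif(seq):
    # Scan an RNA or DNA sequence (U is treated as T) for the motif.
    return bool(G4_MOTIF.search(seq.upper().replace("U", "T")))

print(has_g4_motif("GGGAGGGGCGGGUCUGGG"))    # True: G-rich rG4-like sequence
print(has_g4_motif("AUGCAUGCAUGCAUGCAUGC"))  # False: no G-runs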
Notably, the majority of NRAS rG4 interactors have previously been annotated as RBPs (48 proteins, 60%). However, 32 proteins (orange circles) were not known as RBPs. Highly connected nodes were identified computationally (33), which revealed several interconnected complexes such as the HRNPH1/DDX5/DDX17 complex previously described to influence rG4-dependent splicing (34). Another complex not previously linked to rG4-mediated mechanisms was the eIF4E2/GIGYF1/GIGYF2 complex, which has a role in the negative regulation of translation initiation during development (35). Gene Ontology enrichment analysis was then performed to test which functional categories were over-represented in the rG4 interactor dataset (36) ( Figure 2D and Supplementary Figure S3). Of 80 high-confidence rG4 binders, 55 were assigned to specific terms/pathways, including splicing (19 proteins), purine NTP-dependent helicase activity (10 proteins), regulation of splicing (6 proteins) and RNA biogenesis (polyadenylation, cleavage and 3 -end processing, 7 proteins). Strikingly, rRNA base methylation was the most significantly enriched term, with NSUN5, an rRNA methylase (37), being ranked the fourth most significant interactor (Supplementary Table S2). Overall, our approach has identified new rG4-interacting proteins giving insights into previously unknown and unexpected functions for rG4s and their binding partners. Validation of identified NRAS rG4 interactors To validate the rG4-binding proteins identified, selected proteins were epitope-tagged with either N-terminal V5 (NV5) or C-terminal MYC (CMYC) and AEs were performed with rG4 or control baits followed by Wes Sim-ple Western analysis to evaluate rG4-protein interactions. Candidates were chosen based on the ranking of the proteins (Supplementary Table S2) and potential links to G4-mediated control mechanisms. Hence, cytoplasmic actin/tubulin transport affiliated proteins (kinesins KIF22 and KIF23), or kinases (MARK3, CDK12), or NFX1, which is a general nuclear-cytoplasmic RNA export factor (38), were excluded for this study. For similar reasons, the GIGYF1/GIGYF2/eIF4E2 complex, which inhibits translation initiation, was not studied (35). We focused on DDX17 and DDX5, each of which has a reported G4-relevant role in splicing (34), but their role in post-transcriptional control is not well explored. DDX3X is a DEAD box helicase related to DDX17 and DDX5 but there is no previous report of a DDX3X-rG4 interaction. Further candidates selected for validation were FXR1 and FXR2, which are homologs of the fragile X mental retardation protein (FMRP) that is known to bind to G4s (39). While the expression of several tagged proteins could be confirmed ( Figure 3A), RBM6 (Rank 1) and PRRC2B (Rank 2) could not be transiently expressed (data not shown). Endogenous DHX36 binding to the NRAS rG4 but not to control RNAs was used as a positive control ( Figure 3B and C). As endogenous DHX29 did not interact with any baits (Figure 1E), tagged DHX29 was used as a negative control and showed no binding confirming that the epitope tag does not independently interact with oligonucleotide baits ( Figure 3B). Using this approach, we confirmed GRSF1, NSUN5 and FXR2 as rG4-binding proteins that had no or minimal binding to mrG4, SL or B controls ( Figure 3C). Likewise, we determined the DEAD box helicase DDX3X as a novel rG4 interactor and that the DDX5 and DDX17 helicases were also specifically enriched by the NRAS rG4. 
Previously, eIF4AI, another DEAD box RNA helicase pivotal for translational initiation, was shown to unwind rG4 structures (19). In our AE-LC-MSMS experiments eIF4AI was not a significant interactor (ranked position 1102, Supplementary Table S1) and tagged eIF4AI expression did not reveal any interaction with the NRAS rG4 bait ( Figure 3B). eIF4AIII, another helicase closely related to eIF4AI (40), but not known to interact with rG4s, also showed lower binding to NRAS rG4 as compared to DHX36 ( Figure 3B), despite being ranked 32 in the high confidence interactors. Together, our experiments confirmed specific interaction of the NRAS rG4 with several identified high-confidence interactors. Differential selectivity for rG4 structures by NRAS rG4binding proteins RBPs can bind several mRNA targets and, in some cases, can interact with DNA through cytoplasmic-nuclear shuttling to execute different functions (1,41). We therefore explored the binding selectivity of identified NRAS rG4interacting proteins for the 5 -UTR rG4 (BCL2) and for DNA G4 versions of the NRAS and BCL2 sequences. The folding of oligonucleotides into G4s was confirmed by CD spectroscopy and UV thermal melting spectroscopy (Supplementary Figure S1). Proteins were affinity-enriched with either RNA or DNA oligonucleotides containing either the Figure S2 and Table S2). Each dot represents one protein. The X-axis shows the average of peptide (or spectral) counts for mrG4, SL and beads combined (six replicates). The Y-axis represents the average of peptide (or spectral) counts for NRAs rG4 (two replicates). Proteins in red are significantly enriched. (B) Overview of filtering parameters to identify high-confidence interactors; false discovery rate (FDR) (C) Cytoscape analysis of 80 high-confidence NRAS rG4 interacting proteins. Green diamonds represent previously identified mRNA binding proteins, dark orange diamonds represent candidate mRNA binding proteins and light orange circles represent novel rG4 interactors not previously known as a mRNA-binding proteins. The width of orange lines (edges) describes the probability ('FDR.G4 vs rest' Supplementary Table S2, NRAS or BCL2 G4 structures ( Figure 4A). Endogenous DHX36 served as a positive control. DHX36 exhibited an apparent preference for NRAS over BCL2 oligos. When RNA or DNA forms of NRAS or BCL2 were compared, RNA oligos appeared to preferentially bind DHX36 (Figure 4B). Each of the affinity-tagged NRAS rG4-interacting proteins, GRSF1, NSUN5, DDX3X, DDX17 or FXR2 showed an individual qualitative preference for different G4s ( Figure 4B), with a general preference for rG4s over DNA G4s and for NRAS over BCL2. Notably, NSUN5 appeared to show significant selectivity for the NRAS rG4. We next evaluated endogenous rG4-protein interactions by immuno-detection using specific antibodies (Figure 4C). The binding specificity of tagged DDX3X (Figure 4B) was accurately recapitulated by the endogenous DDX3X protein ( Figure 4C). Results for endogenous and epitope-tagged DDX17 also suggest an interaction with rG4 structures. Our results show that endogenous DHX9, a known rG4 binder (13), preferentially interacted with the NRAS/BCL2 rG4s when compared to DNA G4s. Comparable to its tagged homolog FXR2, endogenous FXR1 was also seen to associate with rG4s but not DNA G4s, while GRSF1 binds all G4s tested. Endogenous DHX9 and DHX30 showed no evidence of G4 interaction ( Figure 4C). 
Together, these data indicate that identified rG4 interactors have a preference for selected rG4s over the corresponding DNA versions. Glycine-arginine domains are enriched in NRAS rG4 interactors GAR domains are comprised of RGG and/or RG repeats and are important features in rG4 binding (42). There- fore, we calculated the presence of di/tri-RGG and di/tri-RG motifs in 35 most significant NRAS G4 interactors described above. In total, 55.8% of the 35 NRAS rG4binding proteins contained di-RGG/tri-RG (38.2%) or di-RG (17.7%) domains. This contrasts with only 5.2% di-RGG/tri-RG or 8.0% di-RG motifs detected in the entirety of proteins identified by AE-LC-MSMS (NRAS rG4, mrG4, SL and beads) ( Figure 5A and B). Next, we explored whether binding of selected proteins, DDX3X, DDX5 and DDX17, ( Figure 5C) to the NRAS rG4 structure is dependent on GAR domains by mutating certain arginines in the RG/RGG domain to alanine (Supplementary Tables S1, 3 and 4). Affinity-tagged versions of WT and mutant proteins (mRG) were expressed in HeLa cells ( Figure 5D) and the binding to rG4 oligonucleotides and controls tested ( Figure 5E). RG/RGG domain mutation of DDX3X and DDX17 substantially abrogated binding to the NRAS rG4 bait ( Figure 5E). By contrast, mutation of the DDX5 GAR domain did not disrupt rG4 binding indicating that another domain in the protein must facilitate binding. The GAR domain in DDX3X mediates interaction with rG4containing mRNAs in cells Our AE-LC-MSMS experiments revealed DDX3X as a new rG4-interacting protein. DDX3X is implicated in several aspects of RNA biology and mutations in DDX3X are linked to tumorigenesis, especially medulloblastoma (43,44). Thus, we aimed to identify whether DDX3X interacts specifically with endogenous transcripts containing rG4s. Importantly, in contrast to earlier individualnucleotide resolution UV-crosslinking and immunoprecipitation (iCLIP) experiments (45,46), our approach was based on the antibody free Strep-tag--Streptactin system for AEs (see 'Materials and Methods' section). Furthermore, the method was adapted to enhance the recovery of rG4 targets by comparing WT DDX3X with the rG4-binding impaired RG-mutant together with protocol enhancements to recover G-rich rG4 motifs (see 'Materials and Methods' section). Hence, we performed individual-nucleotide resolution UV-crosslinking affinity enrichments (iCLAE) using HEK293 cells expressing Strep-tag/haemaglutinin (ST/HA)-tagged WT or RG-mutant DDX3X proteins at Simple Western analysis for expressed-tagged proteins were performed as described in Figures 1 and 3 with endogenous DHX36 used as a positive control. A total of 1 g of cytoplasmic cell lysates (L) loaded in GRSF1, NSUN5 and DDX3X panels while 5 g were loaded in the DDX17 and FXR2 panels. (C) as (B) but using antibodies against selected endogenous proteins. endogenous levels ( Figure 6A) (27,47). Next, cells were UV irradiated to cross-link RNAs to WT or RG-mutant DDX3X followed by isolation of cytoplasmic RNAprotein complexes. After RNAse treatment and RNA endlabeling, ST/HA-tag AEs recovered similar amounts of WT and RG-mutant DDX3X protein ( Figure 6B), but less RNA was obtained from the RG-mutant compared to WT DDX3X ( Figure 6C). iCLAE libraries were prepared using a lithium buffer for reverse transcription to prevent polymerase stalling at G-rich regions (5). To improve recovery of the reduced RNA binding by the RG-mutant, an increased number of cells was used as starting material in this case. 
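As a simple illustration of the di-RGG/tri-RG tally reported above for the NRAS rG4 interactors, the sketch below flags candidate GAR motifs in a protein sequence. The exact spacing rules behind the published counts are not stated in the text, so the allowance of up to four residues between repeats, and the toy sequences, are assumptions made only for illustration.

import re

DI_RGG = re.compile(r"RGG.{0,4}RGG")        # two RGG boxes separated by <= 4 residues (assumed spacing)
TRI_RG = re.compile(r"RG.{0,4}RG.{0,4}RG")  # three RG repeats separated by <= 4 residues (assumed spacing)

def gar_motif_class(protein_seq):
    # Classify a sequence by the first GAR-type repeat pattern it contains.
    if DI_RGG.search(protein_seq):
        return "di-RGG"
    if TRI_RG.search(protein_seq):
        return "tri-RG"
    return "none"

print(gar_motif_class("MAARGGAARGGSA"))   # di-RGG
print(gar_motif_class("MAARGSRGAARGSA"))  # tri-RG
print(gar_motif_class("MAAKKKAAA"))       # none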
No library was amplified from the 'beads only' control (Supplementary Figure S4). iCLAE and total RNA sequencing reads were aligned to the human genome (hg19) and peaks were called for regions with ≥10 reads and a maximum allowed gap of 30 nt (see 'Materials and Methods' section). Overall, 5443 WT DDX3X peaks were identified in two out of three biological replicates, which corresponds with previously published iCLIP datasets (Supplementary Figure S5A-C). The majority of peaks (4457; 82%) aligned within coding transcripts, 12% (660) to non-coding regions and 6% (328) to intergenic regions (Figure 6D). Most non-coding peaks (55%) were annotated as pseudogenes, including long non-coding (23%) and antisense RNAs (7%), while the remainder mapped to rRNA, snRNA and other miscellaneous RNAs (Figure 6E). To rule out any bias from nonspecific binding due to transcript abundance, we evaluated the correlation between gene expression levels and iCLAE signal for the WT DDX3X protein (Supplementary Figure S6A and B). There was little correlation (r = 0.29) between transcript levels and WT DDX3X binding. As the DDX3X-specific iCLAE signal was primarily found in coding transcripts, we re-aligned the reads to the human coding transcriptome. This revealed that mutation of the GAR domain significantly altered the binding properties of DDX3X, resulting in a reduced peak count (3446) with 48% overlap with the WT protein (Figure 6F; see 'Materials and Methods' section and Supplementary Table S5). Of the 4110 WT DDX3X-specific peaks, 1697 were located within 5′-UTRs (∼6-fold enrichment) and 1921 in coding exons (∼3-fold enrichment) (Figure 6G). Furthermore, an altered binding site preference compared to WT DDX3X was detected when WT DDX3X and the RG-mutant were compared (Figure 6H). To investigate potential binding targets, a motif analysis was performed in which 100 nt around the center of peaks located within annotated transcripts were scanned for the presence of the (G2-L12)4 G4 motif (Figure 6I). Strikingly, the G4 motif was found in 55% of unique WT DDX3X peaks. Even though more cells were used to obtain a library in the DDX3X RG-mutant condition, only 23% of peaks contained the G4 motif (P = 3.4e-112, Chi-square test for proportions; prop.test() in the statistical software R). The impaired rG4-binding ability of the RG-mutant DDX3X is consistent with our AE experiments, which showed that the RG-mutant DDX3X protein is not captured by the NRAS rG4 bait (Figure 5E). MEME motif-based sequence analysis revealed that DDX3X binding sites that do not contain an rG4 defined by (G2-L12)4 still contained G-rich sequence motifs (motifs 1, 4 and 5 in Supplementary Figure S7A). There is potential for these G-rich regions to form non-canonical G-quadruplexes. Interestingly, the non-rG4-binding RG-mutant shows enrichment for A-rich motifs (motifs 2 and 6, Supplementary Figure S7A), perhaps reflecting the increased 3′-UTR binding of the DDX3X mutant (Figure 6H). To reveal possible biological pathways regulated by DDX3X rG4 binding, we selected mRNAs with peaks in the top quartile containing a G4 (G2-L12)4 motif with a logFC > 1 when compared to the mutant DDX3X iCLAE signal (P < 0.05). This generated a list of 104 DDX3X target mRNAs (Supplementary Table S6).
When the binding of WT and RG-mutant DDX3X to 5 -UTRs of mRNAs of several cancer-related genes was compared, a clear reduction in signal could be seen in the mutant condition ( Figure 6J and Supplementary Figure S7B). Gene ontology enrichment analysis assigned 84 transcripts to specific terms/pathways including adenosine triphosphate maintenance and mitochondrial membrane integrity terms (Supplementary Figure S7C). It was evident that DDX3X binds to several mRNAs that encode components of the oxidative phosphorylation system, suggesting a role of DDX3X in the regulation of energy production in the cell (Supplementary Figure S7D). Overall, these findings suggest that WT DDX3X binds a subset of mRNA targets through rG4 recognition mediated by its GAR domain and that this interaction is notable for components of the oxidative phosphorylation machinery. DISCUSSION A greater understanding of the roles of rG4 structures in mRNA post-transcriptional control will be achieved through a comprehensive knowledge of associated proteins. Over 1500 human RBPs have been cataloged (31,48), with most having no assigned role. Affinity selection has defined sequence motifs for only ∼200 RBPs (49), so it remains a critical question whether RNA secondary structure or the sequence per se, is pivotal for RNA-RBP interaction. As a step towards this, we have developed an unbiased AE-LC-MSMS approach to identify cytoplasmic RBPs that interact with the NRAS 5 -UTR rG4 structure. The largest category of rG4 interactors consisted of proteins involved in RNA splicing and processing, followed by proteins involved in translation. Curiously, rRNA base methylation was revealed as a significant term (P-value < 0.0005), which may be important since FMRP binding sites are enriched in 6mA-methylation at rG4 motifs (50). Another example of a regulatory RBP-rG4 epigenetic interaction is seen with the polycomb repressive complex (PRC2), which recognizes rG4s in histone-associated RNAs to promote epigenetic silencing (51). It is noteworthy that we identified several helicases as significant rG4 interactors. This supports the view that there is a dynamic balance between forming and resolv-ing rG4 structures. It has been suggested that in cells rG4s are globally unfolded, which is mediated by rG4 interacting proteins (6). However, these conclusions are drawn from transcriptome-wide averaging and may obscure the dynamics of rG4 formation in individual transcripts. Within the helicases, eIFA4I was not observed as a specific rG4 binder ( Figure 3B and Supplementary Table S1). This contrasts with earlier work combining ribosome foot-printing with eIF4AI inhibition which suggested a link between rG4s and eIF4AI (19). A possible explanation is that rG4 association with eIF4AI can only be detected under conditions when eIF4AI is hampered by small molecules and is part of the translation initiation complex eIF4F (19,52). GAR domains are commonly found in RBPs including several known rG4-interacting proteins (39,(53)(54)(55). Here, we have extended the number of rG4 interactors that contain a GAR domain and we have confirmed that rG4binding was abrogated by mutation of this domain for DDX3X and DDX17. Our work and the work of others (56) has revealed several rG4-interacting proteins, including GRSF1 and NSUN5, that do not possess a GAR domain. 
This might point to the existence of two classes of rG4-interacting proteins, one dependent on the presence of the GAR domain and another class possessing alternative RNA-binding modes specialized for rG4 recognition. The main focus of our study was identification of rG4dependent mRNA binders in the cytoplasm. Hence, cytoplasmic extracts rather than total cell extracts were used. Indeed, we could determine several new rG4-binding proteins but also proteins that had been detected in rG4-dependent AEs from total cell extracts, such as GRSF1 and NSUN5 (56). While GRSF1 is a protein targeted to mitochondria, NSUN5 is considered a nuclear protein. Our targeted approach, with focus on cytoplasmic events, now suggests NSUN5 as a potential shuttling protein that might have roles in methylation of mRNAs in the cytoplasm. For future experiments, it is important to study localization of rG4binding proteins in regards to their molecular function. Unraveling the function of rG4 interactors also requires identification of their mRNA targets. We were particularly interested in rG4 structures as recognition elements for rG4s-binding proteins. To address this, we compared WT DDX3X to RG-mutated DDX3X iCLAE data. The analysis of DDX3X iCLAE experiment uses stringent log-fold change cut-off of one to generate Supplementary Table S6 which shows for the first time that DDX3X, an important cancer-related helicase, has a set of RNA targets that require an rG4 structure for recognition. However, using these stringent restrictions might prevent detection of other mRNA targets. For instance, the iCLAE experiment detects peaks over the NRAS 5 -UTR as shown in Figure 6J and listed in Supplementary Table S5 and a decrease in DDX3X binding upon mutation of the GAR domain is evident. Still, this decrease does not meet the stringent cut off log-fold change cut-off of one (Log2 fold for NRAS -0.65). It is noteworthy that we uncovered a cluster of DDX3X targets that encode proteins involved in the oxidative phosphorylation chain. Indeed, dysregulation of the synthesis of oxidative phosphorylation components has severe consequences and is linked to several diseases, including Hunt-ington's, Alzheimer's, Parkinson's disease (57) and cancer (58). In summary, we have identified new cytoplasmic RBPs that interact with the rG4 secondary structure. The majority of rG4 binders contain GAR domains and mutation of this domain in the clinically important rG4-interacting protein DDX3X hampered the interaction in vitro and in cells. Moreover, we discovered that DDX3X mRNA targets are significantly enriched in rG4s with most of the top 104 mRNAs encoding for essential components of the mitochondrial oxidative phosphorylation chain. The discovery of rG4-interacting proteins will enable future mechanistic studies of rG4 dynamics and function in the cell. DATA AVAILABILITY RNA-sequencing and iCLAE data have been deposited at Gene Expression Omnibus (GEO) (GSE106476). The AE-LC-MSMS data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD010860.
2018-10-05T01:42:49.004Z
2018-09-26T00:00:00.000
{ "year": 2018, "sha1": "5cdcc9d6303660ba61894a475a46b183f725f4c3", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/nar/article-pdf/46/21/11592/26901542/gky861.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b0998acb0e313e341c3adf04d35fc794800cffb4", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
222079418
pes2o/s2orc
v3-fos-license
Cancer Stem Cells-Role in Oral Squamous Carcinoma-A Review of Literature Deepthi Sogasu1, Devaraj Ezhilarasan*2, Smiline Girija AS3 1Saveetha dental college and hospitals, Saveetha institute of medical and technical sciences, Saveetha University, Chennai-600077, Tamil Nadu, India 2Department of Pharmacology, Saveetha dental college and hospitals, Saveetha institute of medical and technical science, Saveetha University, Chennai 600077, Tamil Nadu, India 3Department of Microbiology, Saveetha dental college and hospitals, Saveetha institute of medical and technical sciences, Saveetha University, Chennai-600077, Tamil Nadu, India INTRODUCTION Stem cells are cells with the unique ability to divide and differentiate into a variety of cell types and form different organs. Cancer stem cells (CSCs) are similar to normal physiologic stem cells in that they can divide, form large numbers of cells and differentiate. They form 1-2% of the tumour in cancer conditions and have the properties of mutagenicity and tumorigenicity. On the surface of these cells there are specific tumorigenic and non-tumorigenic surface markers that mediate their functions. The hallmarks of cancer include activation of invasion and metastasis, enabling of replicative immortality, induction of angiogenesis, and resistance to cell death, and cancer stem cells are known to display such properties. Oral squamous cell carcinomas are among the most common heterogeneous cancers arising from the mucosal lining of the oral cavity. Oral cancer constitutes about 30% of cancers in India. The origin of these cancers is attributed to poor lifestyle habits such as smoking, betel quid chewing and excess alcohol consumption. The approved treatment for OSCC as of 2006 is cetuximab, a targeted molecular therapy that selectively acts on the epidermal growth factor receptor; it is the only approved molecular therapy for the treatment of OSCC. Cancer stem cells are found in the tumorous part of the body during the cancerous condition. Tumours can be CSC-positive or CSC-negative; if a tumour is CSC-positive, the prognosis is relatively poorer than that of a CSC-negative tumour. Thus, CSCs play a major role in predicting the course of treatment of OSCC and the prognosis. CSCs are essentially cancer stimulators, so specific targeted therapy against their surface markers is highly necessary. In a recent study, CSCs and CSC-like cells were isolated from OSCC cell lines. It was suggested that CSC-like cells possessed a reduced ability for cell proliferation. The authors were also able to identify the CSC marker CD133, which is responsible for uncontrolled proliferation, while there was minimal to no trace of Ki67, a marker that suggests reduced drug sensitivity. In a recent 2019 study, syringic acid (SA), known to possess antioxidant, hepatoprotective and anti-cancer effects, was tested for its efficiency in inducing apoptosis of oral squamous cancer cells through the mitochondrial pathway. The results of this study were positive and suggest a novel treatment method for OSCC (Ezhilarasan and Abijeth, 2020). Significant downregulation of miR-21 and miR-3 has been observed in exosomes isolated from the stem cell populations of OSCC tumours, suggesting that oral cancer cells have unique miRNA profiling.
A remarkable study discovered that a BMI1 inhibitor has therapeutic effects in cisplatin-resistant tumours. This is possible because it can reduce the metastasis initiated by circulating CSCs; the BMI1 inhibitor eventually leads to necrotic cell death of CSCs. It was therefore concluded that BMI1-targeted inhibition is a novel treatment method for oral squamous cell carcinomas. The authors of a recently published review suggested that CSCs are capable of epithelial-mesenchymal transition, which accelerates metastasis, and that a molecular-level understanding of CSCs can help produce vaccines for cancer (Gurel, 2019). This review is important at present because of the escalating number of patients succumbing to the harmful effects of cancer, especially oral cancer in India. Finding a specific novel treatment, and possibly even designing a vaccine against cancer, could significantly reduce the frequency of cancer. A challenge faced by other authors is the lack of single, specific biomarkers: the biomarkers for CSCs are so numerous that it is not possible to target all of them at the same time. Treatment can work only if narrowly targeted therapy is provided, so correctly identifying sensitive biomarkers for successful results is a daunting task. This review also helps in understanding CSCs, which provides an opportunity for creating unique treatment protocols. This work aims to help understand the role of cancer stem cells in oral squamous carcinomas. Cancer stem cells The cancer stem cells (CSCs) are specialized cells that function on the principle that tumor growth is analogous to the renewal of healthy tissues. Cancer stem cells form a small population of dedicated stem cells. Many tumors harbour CSCs in dedicated niches, making the identification of such cells very difficult. A recent study on gene expression of cancer stem cell populations led to the discovery of new prognostic biomarkers and pharmacological targets. Tumorigenic cancer stem cells have been observed to behave analogously to normal stem cells, possessing functional and phenotypic attributes similar to those of the normal physiologic cells from which they are derived. The main feature differentiating tumorigenic cancer stem cells from physiologic stem cells is that they undergo a poorly regulated process of organogenesis. Identification of tumor subtypes is necessary because they show differential responses to anti-tumour drugs; this is possible through newer research avenues and advances made in the field of clinical oncology. Oral squamous carcinoma It is considered the sixth most common cancer, accounting for up to 2-4% of all malignancies worldwide. Oral malignancy is relatively rare in Europe, but India, and especially Chennai, is a notable exception. It is suggested that approximately 40% of malignant neoplasms in males are oral, compared with about 9% in females. Among oral cancers, an estimated 94.5% are oral squamous cell carcinomas, and the disease is usually diagnosed with a poor prognosis.
The diagnosis and prognosis of oral squamous cell cancer can be made by careful analysis of the human metabolome. The results of the study suggest that the diagnostic potential for OSCC can be measured by evaluating the upregulation of L-carnitine, lysine, 2-methylcitric acid, putrescine, 8-hydroxyadenine, 17β-estradiol, 5,6-dihydrouridine, and MTA (Sridharan et al., 2017). A similar study suggested that saliva can be used as a source of biomarkers for the positive diagnosis of oral squamous cell carcinoma (Umamaheswari, 2014). Specific markers, including MMP-9 and chemerin, are used for early diagnosis of oral squamous cell carcinoma. The tumor size is proportionate to the extent of metastasis in the lymph nodes, and these are among the most important predictors of the extent of cancer. Oral squamous carcinomas are usually associated with and accompanied by perineural invasion (PNI), which worsens the prognosis. Poor clinical outcomes for oral squamous carcinoma are closely associated with the presence of multifocal and extratemporal PNI. The tumor in oral squamous carcinoma is known to secrete exosomes into the surrounding extracellular environment, promoting the horizontal transfer of bioactive molecules via mechanisms involving microRNA. One of the important considerations after tumor ablation is the functional reconstruction of the dental structures with adequate rehabilitation. The problem of the hour is the lack of awareness among dental students about the methods of diagnosis and prevention of oral cancer, a result suggested by two different studies conducted in India and Nepal (Prenit and Bandana, 2018). Oral squamous carcinoma -normal propagation A significant correlation has been established between RNF8 and predictive features of the tumor such as tumor thickness, ECS, nodal stage, and radioresistance, all of which are associated with the spread of OSCC. About 30% of OSCCs are RNF8 positive, and an OSCC tumor that is RNF8 negative carries a much better prognosis. Recently, a novel DNA damage response protein acting against RNF8 has been discovered. It integrates protein phosphorylation and ubiquitylation signalling and plays a major role in the cellular response induced by genotoxic stress. Another reason for the propagation of oral squamous cancer is oxidative stress due to reactive oxygen species (ROS). Oxidative stress potentiates the metastasis of cancer; this is similar to the effect of ROS in chronic liver disease (Ezhilarasan, 2018). Application of cancer stem cells The presence of CSCs in the tumor accelerates cancer progression and metastasis and causes a low proliferation rate and high drug resistance, thus evading treatment. The cancer stem cells, although a minor constituent of the tumor, can differentiate into bulk tumor cells. They can also metastasize and alter adjacent stromal cells, which eventually allows evasion of conventional treatment therapies and leads to a poorer prognosis. The cancer stem cells possess features allowing their migration, invasion, and metastasis; such features interfere with the treatment process, and the cells eventually become treatment resistant. One study suggests that cancer stem cells can potentially be managed through bioenergetic signalling pathways involving fatty acid metabolism, glutamine metabolism, and the AKT-mTOR pathway (Chae and Kim, 2018). Cancer stem cells are very dangerous and difficult to identify because they notoriously imitate physiologic stem cells.
One such case is that cancer stem cells use the same signalling pathways found in normal stem cells, namely the Wnt, Notch, and Hedgehog (Hh) pathways (Marimuthu, 2018). The main applications of cancer stem cells rest on two critical properties: the establishment and the recurrence of cancerous tumors. Novel targeted therapies for cancer mainly work on the principle of interrupting tumor progression through targeted inhibition of the cancer stem cells. Their self-renewal capacity and potential to differentiate are unlimited, and they form heterogeneous populations of cancer cells. Novel therapeutic targets can be developed for treatment through the prevention of tumor progression. The severity of cancer can be determined by evaluating the composition of CSCs within a tumor: the more stem cells present, the larger the tumor and the poorer the prognosis. Current treatment options The CSCs carry components of the renin-angiotensin system, and targeting these components in cancer stem cells could be applied as part of future cancer treatment. Octamer-binding transcription factor 4 (OCT4) and histone modification methods can be used to regulate the embryogenesis and pluripotency of CSCs. "Two-hit" therapy involves metabolic inhibitors that block CSC propagation. Vitamin C is used as a targeted inhibitor of the glycolysis pathway, thus affecting CSCs (Satheesh et al., 2020). It has been suggested in a review article that vitamin C is a powerful antioxidant in physiologic oral tissues, but it has also been noted that the pro-oxidant activity of vitamin C is activated in pathological oral tissues. This raises a conflict over the extensive use of vitamin C as a targeted inhibitor. A study suggests that TPP derivatives are considered a "powerful" candidate to block CSCs (Francesco, 2019). A meta-analysis of novel treatments for oral tongue squamous cell cancer suggests that prognosis can be assessed through certain indicators such as occult node positivity, expression of E-cadherin, and assessment of MMP9 at the ITF. These markers are especially useful for high-risk patients requiring invasive treatment strategies. Another prognostic marker for oral tongue squamous cell cancer is the extent of p53 expression, which is associated with tumor depth and aggressiveness. It has been shown that underlying conditions such as diabetes mellitus can aggravate the metastasis of tongue squamous cell carcinoma; hyperglycemia thus potentiates propagation and metastasis of tongue squamous cell carcinoma. It can therefore be useful to reduce the blood glucose level to prevent spread and metastasis of cancer. This can be achieved using natural bioactive compounds such as those from Caralluma fimbriata (Ashwini et al., 2017). Natural extracts of neem are also known to have anticarcinogenic properties against some oral cancers. Cancer stem cells in all cancers Cancer stem cells can be found in tumors of various types of cancers. They have been isolated and identified in myeloid leukaemia and, more commonly, in solid tumors of the brain and breast. Tumorigenic cancer stem cells have been isolated in glioblastoma multiforme and medulloblastoma, and further detailed study concluded that the cancer stem cells were restricted to the CD133+ subpopulation. Some of the most accepted theories suggest that CSCs arise as a result of epigenetic and genetic alterations to resident tissue stem cells.
Self-renewal and differentiation capabilities reside within a subpopulation of tumor cells, termed cancer stem cells (CSCs); the remaining tumor cell population cannot initiate tumor development or support continued tumor growth and proliferation. Cancer is metastatic primarily due to the ability of such CSCs to proliferate. The main principle governing the functions of CSCs is that cancers are dysregulated tissue clones maintained by continual, distinct subsets of cells. Cancer stem cells in oral squamous carcinoma Cancer stem cells occur as many subpopulations in OSCC tumors. They use signalling pathways that are very similar to those of physiologic stem cells: the Wnt, Notch, and Hedgehog (Hh) pathways. The cancer stem cells have the properties of chemoresistance and radioresistance, allowing them to evade treatment. Even after treatment, oral squamous cell carcinoma can recur, and the recurrence is a more dangerous and aggressive form of cancer that can be fatal. The recurrence rate is 32.7% of cases, and the recurrence time ranges from 2 to 96 months, with an average of 14 months. Thus, it is necessary first to identify and then to inhibit these cancer stem cells. To identify such stem cells, it is necessary to know their various biomarkers; in a similar review, the markers were isolated by flow cytometry. The CD-24 marker is known to promote tumor growth and facilitate angiogenesis. CD-29 is another biomarker which helps in tumor invasion, migration, and metastasis of CSCs (Moraes, 2017). CD-44 is closely associated with the general characteristics of cancer stem cells. CD-98 promotes tumor generation and at high levels disrupts DNA repair genes. CD-133 can demonstrate the properties of cancer stem cells, especially of oral squamous carcinoma. A translational regulator, Musashi-1, is considered a marker of oral squamous cancer stem cells; it is known to be closely related to CD133, indicating their combined role in oral carcinogenesis (Jayanthi et al., 2020). Another common biomarker specific to oral squamous carcinoma is Nestin, which aids in neovascularization and angiogenesis and is considered an early biomarker for oral squamous carcinoma. The prognosis of OSCC can be predicted from the CSC proportion, tumor size, and stage by examining the primary patient tumor. For inhibiting the biomarkers, it has been found that SOX2 is a CSC regulator, so inhibiting SOX2 can also control the proliferation of CSCs. It has also been suggested that the upregulation of BAX and PARP cleavage causes simultaneous downregulation of Bcl-xl; this principle can be applied for long-term treatment of CSCs, in which the majority of cells undergo apoptosis. Acacia catechu has been shown to increase the expression of the Bax and Bcl-x genes and induce anti-cancer effects in SCC-25 cells. Novel treatment options The main reason for seeking novel treatment options apart from conventional cancer treatments is that conventional methods such as chemotherapy, radiation, and surgical intervention are nonspecific and subject adjacent healthy cells to unwanted trauma. Hence the need for newer, specific treatment options. QKI is known to be a novel CSC inhibitor; it impairs multiple oral cancer stem cell properties via partial repression of SOX2. Quinacrine-based gold hybrid nanoparticles have been used to inhibit DNA repair in cancer stem cells. Various stem cell markers, such as OCT4, SOX2, Nestin, and CD44, can be suppressed by all-trans retinoic acid.
NK cells have been observed to increase in function when cultured with CSCs of oral squamous carcinoma. This principle can thus be used to design novel treatments: NK cells can be repeatedly transplanted to the site of the tumor, causing specific elimination of the oral squamous cancer cells. Histone modification that alters calcium regulation can control the signalling pathways of CSCs and can be used as a novel treatment. TPP derivatives are considered powerful targeted inhibitors that block CSCs. Another novel treatment option to block CSCs is iron chelator targeted therapy: the iron chelator is combined with chemotherapy, which suppresses the stemness of the cancer stem cells, and it can be used especially for oral squamous cell carcinoma. In silico and in vitro trials show that coumarin derivatives support intrinsic pathway-mediated apoptosis; this was tested on human stomach cancer cells but could also be tested against human oral cancer cells (Perumalsamy, 2018). Newer drugs can also be modified and formulated as nanoparticles and liposomes, which allows better drug delivery and efficient drug action. Among nanoparticles, selenium and zinc oxide nanoparticles have been shown to be good chemotherapeutic agents (Menon, 2018). Targeted therapy to enhance pro-apoptotic agents is also considered a novel treatment method for OSCC. CONCLUSIONS One of the main limitations of the study is the variety of biomarkers for CSCs: it is extremely difficult to single out one specific biomarker for treatment so as to fulfil the criteria of narrow-range therapy to inhibit cancer stem cells. There is a risk of recurrence of cancer due to the progressive action of CSCs and their metastasis; thus, further research into specific targeted therapies is required. Understanding the mechanisms of CSCs will help develop novel treatment methods for cancer. Vaccine development based on the principles of CSCs is under way and may help eradicate cancer in the future. This review suggests that cancer stem cells play a significant role in oral squamous carcinoma.
2020-10-01T02:15:02.582Z
2020-09-09T00:00:00.000
{ "year": 2020, "sha1": "4b66cb1c6caa03e697298012362892a547ae9dc6", "oa_license": "CCBY", "oa_url": "https://doi.org/10.26452/ijrps.v11ispl3.2906", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4b66cb1c6caa03e697298012362892a547ae9dc6", "s2fieldsofstudy": [ "Medicine", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
131877017
pes2o/s2orc
v3-fos-license
Folic Acid Intake, Fetal Brain Growth, and Maternal Smoking in Pregnancy: A Randomized Controlled Trial ABSTRACT Background Folic acid supplementation during pregnancy plays an important role in fetal growth and development. To our knowledge, no experimental study has examined the effect of folic acid on fetal brain growth in women who smoke cigarettes during pregnancy. Objectives The aim of this study was to investigate the efficacy of higher-dose compared with standard-dose folic acid supplementation on prenatal fetal brain growth, measured by head circumference, brain weight, and brain-body weight ratio (BBR). Design In this randomly assigned, double-blind, controlled clinical trial, we recruited 345 smoking pregnant women attending a community health center in Tampa, FL between 2010 and 2014. Participants were randomly assigned in a 1:1 ratio to receive either 0.8 mg folic acid/d (standard of care at the study center) or 4 mg folic acid/d (higher strength). Participants were also enrolled in a smoking cessation program. A 2-level linear growth model was used to assess treatment effect and factors that predict intrauterine growth in head circumference over time. Multiple linear regression analyses were conducted to estimate the effect of higher-strength folic acid on head circumference at birth, fetal brain weight, and fetal BBRs. Results Mothers who received the higher dose of folic acid had infants with a 1.18 mm larger mean head circumference compared with infants born to mothers who received the standard dose, but this difference was not statistically significant (P = 0.2762). Higher-dose folic acid also had no significant effect on brain weight. The BBR of infants of mothers who received higher-dose folic acid was, however, 0.33 percentage points lower than that for infants of mothers who received the standard dose of folic acid (P = 0.044). Conclusions Infants of smokers in pregnancy may benefit from higher-strength maternal folic acid supplementation. We noted a decrease in the proportion of infants with impaired BBR among those on higher-dose folic acid. This trial was registered at clinicaltrials.gov as NCT01248260. Introduction The role of maternal smoking on fetal development is well documented (1)(2)(3). Nicotine readily crosses the placenta into the fetal serum and brain, (4) and has been found to be neurotoxic (5). Maternal tobacco exposure is associated with reduced head circumference (1,2), altered indicators of infant body proportionality, such as brain-body weight ratio (BBR) (3), and infant neurocognitive developmental problems (6,7). Although smoking cessation before midpregnancy may mitigate smoking-related deficits in infant head circumference and BBR (2, 3), many mothers find it difficult to quit, thereby exposing their unborn babies to the harmful effects of tobacco. A likely mechanism of action of maternal smoking on adverse fetal outcomes is through a reduction in the concentration and activity of maternal folic acid reserve (8,9). Smokers are more likely to consume diets deficient in folate and have impaired folate metabolism (9)(10)(11). Low maternal folate concentrations reduce the bioavailability of folate to the developing fetus, resulting in impaired growth (12). Previous studies suggest that prenatal folate deficiency might have an adverse effect on global fetal brain growth (13), and folate supplementation in early pregnancy may increase head circumference at birth (13) and prevent neurodevelopmental disorders in offspring (14,15). 
The role of folic acid supplementation and folate concentrations in fetal head growth among women of reproductive age is also well documented (13,16). However, there is a dearth of research on the effect of folic acid supplementation among high-risk subgroups, such as women who smoke during pregnancy. To our knowledge, no experimental study has been conducted to examine this relation. In this study, we investigate the efficacy of high-strength compared with standard-dose folic acid on increasing prenatal brain growth. Study outcomes include head circumference, brain weight, and BBR. Head circumference is a noninvasive proxy for fetal brain growth and development (16), and is closely correlated to brain volume (17). We hypothesize that among mothers who smoke cigarettes in pregnancy, higher-dose folic acid treatment will be associated with increased fetal brain growth. Trial population and design This study is a randomized, double-blind, controlled clinical trial primarily conducted to assess the effect of high-strength folic acid on fetal body and brain size. Screening took place among 860 pregnant smokers; 345 met the eligibility criteria and were enrolled in the study. The eligible participants were randomly assigned in a 1:1 ratio to receive either 0.8 mg folic acid/d (standard of care at the study site) or 4 mg folic acid /d (higher-dose treatment). The folic acid tablets had the same color, shape, size, and were packed in identically coded bottles. Allocation concealment was achieved through the pharmacy. Based on the randomization card received from the patient, the pharmacist gave a bottle that contained either a 0.8 or 4 mg folic acid tablet. The code on the bottle was the only distinguishable feature which was used to determine what the bottle contained, and this was concealed with patient ID labels. The physicians, ultrasound technicians, laboratory staff, study investigators, and study participants were all blinded to the treatment groups. Randomization was performed through a computergenerated randomization schedule with the use of a permuted block design, with a block size of 12. This type of design was chosen to allow balance in the number of participants in each group at the end of the clinical trial. Participants were recruited between 2010 and 2014 at the Genesis Clinic, Tampa, FL, a community health center affiliated with the Department of Obstetrics and Gynecology of the University of South Florida. Women were eligible to participate in the study if they fulfilled the following criteria: 1) current smokers, 2) aged 18-44 y; 3) at <21 wk gestation, as confirmed by last menstrual period; and 4) residents of Tampa, FL, or a surrounding area, to facilitate follow-up and reduce attrition. To identify pregnant women who smoked at baseline, screening for cotinine-a biological marker for nicotine-in saliva was performed. Women with detectable cotinine concentrations of ≥1 on a NicAlert test strip were confirmed eligible for the study. Women receiving chronic blood transfusion may have inaccurate measures of red blood cell (RBC) folate concentrations, due to transfused RBCs. Additionally, patients treated with anticonvulsants may have folate deficiency as a side effect of the medication (18). Participants were therefore excluded if there was evidence of chronic blood transfusion and generalized seizure disorder treated with anticonvulsant medication. 
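To make the allocation scheme described above concrete, the following is a minimal sketch (ours, not the trial's actual randomization code) of a 1:1 permuted-block schedule with a block size of 12; the arm labels, the random seed, and the use of Python are illustrative assumptions.

```python
import random

def permuted_block_schedule(n_participants, block_size=12,
                            arms=("standard_0.8mg", "high_4mg"), seed=42):
    """Generate a 1:1 permuted-block randomization schedule.

    Each block contains an equal number of assignments to each arm, shuffled
    within the block, so group sizes remain balanced as enrollment proceeds.
    """
    assert block_size % len(arms) == 0, "block size must split evenly across arms"
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    schedule = []
    while len(schedule) < n_participants:
        block = [arm for arm in arms for _ in range(per_arm)]
        rng.shuffle(block)
        schedule.extend(block)
    return schedule[:n_participants]

if __name__ == "__main__":
    assignments = permuted_block_schedule(345)
    print(assignments[:12])                                        # first complete block
    print({arm: assignments.count(arm) for arm in set(assignments)})  # near 1:1 overall
```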
Fetal ultrasound assessments, study questionnaires, and salivary cotinine laboratory assessments were completed during the first 3 study visits. Participants were questioned regarding tablet intake and compliance, and the reported number of tablets consumed was cross-checked by observing the remaining number of tablets in each bottle. Women who adhered during their second and third visits were categorized as compliant, whereas those who did not adhere during ≥1 of the visits were grouped as noncompliant. At delivery, cotinine concentration was further evaluated, and fetal body measurements were taken. Written informed consent was obtained from all the participating women before trial-related procedures were initiated. Informed consent was available in both English and Spanish. Monetary compensation per clinical visit was estimated based on the median hourly wage for the community and the travel costs. For the total of 4 visits and completion of a dietary history questionnaire, a total of US$90 was reimbursed per participant as follows: first visit US$20; second visit US$20; third visit US$20; at delivery US$20; and dietary history questionnaire completion US$10. Smoking cessation program All women enrolled in the trial signed a contract stating they agreed to commit to quitting smoking and attended ≥1 smoking cessation session at the study site for smoking mothers, or called the Florida Quitline, a toll-free telephone-based tobacco use cessation service. The counseling sessions focused on the following: 1) receipt of self-help materials and how to make a quit attempt; 2) effects of secondhand smoke, partners who smoke, and tips on how to establish smoke-free homes and cars; 3) stress management and benefits of not smoking; and 4) prevention of smoking relapse. Women who persistently tested positive for salivary cotinine were referred by study personnel to Florida's Quitline. Additionally, all women were linked to community smoking cessation services, such as the Hillsborough County Healthy Start program, a free, voluntary, and intensive smoking cessation program. Endpoint measures The outcomes included prenatal fetal brain growth, defined as 1) rates of growth in the intrauterine ultrasound measure of head circumference (growth velocity); and 2) 2 different measures of cumulative head circumference growth (fetal brain weight and fetal BBR). Serial ultrasound measurements of head circumference were obtained during 3 study visits according to standardized ultrasound procedures. Intrauterine head circumference was measured to the nearest millimeter on a transverse view of the fetal head in a plane showing both thalami and the third ventricle. The second outcome, fetal estimated brain weight, was derived from head circumference according to the National Institute of Neurological Disorders formula (19). Head circumference at birth was measured to the nearest centimeter with a measuring tape. The third outcome, the BBR, is the proportion of the body weight that resides in the brain (20,21). The BBR was calculated as 100× the ratio of the infant's estimated brain weight to its birth weight and is expressed as 100 × [0.037 × head circumference (cm)^2.57]/birth weight (g). A high BBR indicates a higher percentage of birth weight residing in the brain, whereas a lower BBR suggests a lower fraction of birth weight residing in the brain (3). The typical values for healthy infants are estimated to be 9-10% (17).
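The brain-weight and BBR definitions above map directly onto a small helper; the sketch below is ours, and the example head circumference and birth weight are hypothetical values, not trial data.

```python
def estimated_brain_weight(head_circumference_cm: float) -> float:
    """Estimated brain weight (g) from head circumference, per the formula above:
    0.037 * HC(cm) ** 2.57."""
    return 0.037 * head_circumference_cm ** 2.57

def brain_body_ratio(head_circumference_cm: float, birth_weight_g: float) -> float:
    """BBR (%) = 100 * estimated brain weight / birth weight."""
    return 100.0 * estimated_brain_weight(head_circumference_cm) / birth_weight_g

if __name__ == "__main__":
    # Hypothetical infant: 33.8 cm head circumference, 3200 g birth weight.
    bbr = brain_body_ratio(33.8, 3200.0)
    print(f"BBR = {bbr:.1f}%")   # healthy infants are typically ~9-10%
```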
Furthermore, higher BBR indicates larger brain weight for a given head circumference and is associated with small-for-gestational-age (SGA) infants (17). Covariates Based on prior knowledge, several maternal and infant characteristics were considered as potential confounding factors due to their demonstrated association with fetal brain development. These include sociodemographic factors (e.g., maternal age, race, marital status, and insurance status), lifestyle factors (cotinine concentration, alcohol consumption, dietary folate, and maternal BMI), perinatal factors (e.g., maternal depression, stress, gestational age), and maternal chronic diseases, such as hypertension and diabetes. Gestational age at delivery, a measure of duration of the pregnancy in weeks, was based on dating ultrasound at first prenatal visit and the date of delivery of the baby. Sociodemographic and lifestyle factors were extracted from participants' medical records, specifically the American College of Obstetrics and Gynecology form. Dietary folate was assessed through the use of a proxy, maternal red blood cell (RBC) folate concentration at study baseline. RBC folate was measured by ELISA. This was the preferred folate biomarker because of its advantages of long-term stability and reduced susceptibility to sudden changes in diet (22). Depression was assessed with the use of the Edinburgh Postnatal Depression Scale. The Edinburgh Postnatal Depression Scale is a validated tool for assessing antenatal and postnatal depressive symptoms (23). Stress was measured with the use of the Perceived Stress Scale 14, which is a validated tool for comparisons between people in study samples (24). Cotinine was measured by a test of maternal saliva for the presence of cotinine with NicAlert (Jant Pharmacal Corporation), a rapid semiquantitative screening test. Salivary cotinine analysis is the most sensitive and specific of the 3 types of cotinine measures (25). Statistical analysis We assessed differences in the baseline sociodemographic, lifestyle, and health-related characteristics of study participants by treatment group. Compliance rate between both groups was also examined. Means of continuous data were compared through the use of independent t tests or ANOVA, whereas categoric variables were compared with the chisquare test. Based on the intention-to-treat population, birth outcomes for all study participants were described in terms of frequencies and percentages. All primary analyses were conducted on a modified intention-to-treat basis and included participants who completed the trial with an observed endpoint, irrespective of compliance to protocol. Based on a power of 80%, a type 1 error rate of 5%, and a 50% reduction in the rate of loss of fetal brain growth, the estimated sample size required to detect a difference in the efficacy of high-strength compared with standard-dose folic acid on enhancing brain development was a total of 100 participants. Growth potentials for fetuses with congenital anomalies or those from multiple gestations may not be comparable to singleton pregnancies without abnormalities. Analyses therefore excluded all pregnancies that ended in a fetal loss (abortions, fetal demise, stillbirths, miscarriages), congenital anomalies, or multiple births. 
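As a rough illustration of the sample-size statement above (80% power, 5% two-sided type 1 error, roughly 100 participants), the sketch below assumes a standardized effect size of about 0.57; that value is our assumption, chosen only so the calculation returns approximately 50 infants per arm, and the original calculation, which was framed as a 50% reduction in the rate of loss of fetal brain growth, may have used a different parameterization.

```python
from statsmodels.stats.power import TTestIndPower

# Assumed standardized effect size (not reported in the text); d ~= 0.57 is used
# here only because it yields roughly 50 participants per arm, ~100 in total.
analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=0.57, alpha=0.05, power=0.80,
                                 ratio=1.0, alternative="two-sided")
print(f"~{n_per_arm:.0f} per arm, ~{2 * n_per_arm:.0f} total")
```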
Because there were 4 repeated measures of cotinine concentration from participants during the trial, we decided to identify the unique paths of cotinine concentrations throughout the pregnancy through the use of a group-based latent class trajectory model. This approach offers a data-driven method to identify distinct individual patterns of a variable of interest and the corresponding probability of falling into each pattern, also known as the posterior probability. Trajectory analyses were conducted, and participants were then classified based on their highest posterior probability. All statistical tests were 2-sided, with an α level set at P < 0.05. We used SAS version 9.4 for all analyses. Fetal brain growth trajectory The primary outcome was the rate of intrauterine growth in head circumference from the beginning of the second trimester of pregnancy until delivery. A 2-level linear growth model (multilevel model) was used to assess treatment effect and factors that predict intrauterine growth in head circumference over time. Multilevel modeling accounts for the dependency in observations when data have a nested, multilevel structure. In this study, the level 1 relation between gestational age and fetal head circumference was modeled individually for each participant, and the average relation across participants was reported. Level 2 variables were then sequentially included in the model to account for differences between babies in average fetal brain growth. These subject-level covariates comprised maternal BMI, cotinine concentrations, and fetal sex. The primary exposure (treatment arm) was also included. The level 1 variable, repeated measures of gestational age, was centered at 13 wk because this period signifies the beginning of the second trimester of gestation. Furthermore, no ultrasound measurements of head circumference were recorded before 13 wk gestation. The treatment effect, i.e., the difference between the 2 groups, was determined from the multilevel models. In the first model, the unconditional growth in head circumference was modeled as a function of gestational age. The intercept and slopes were fit as random effects, which varied across fetuses. An unstructured covariance matrix had the best fit and was utilized in modeling. Variables that were associated with folic acid treatment or with head circumference at birth in our bivariate analysis or in the literature were included in our model. Based on the log-likelihood ratio, only 3 of the likely confounding variables improved the model fit, namely cotinine group, sex, and BMI. In addition to the unconditional model (model 1), results from the following models were also reported: (model 2) model 1 + treatment; (model 3) model 2 + cotinine group trajectory; (model 4) model 3 + fetal gender; and (model 5) model 4 + maternal BMI. Model 5 had the best fit. Thus, the overall treatment effect and other fixed- and random-effects parameter estimates were reported from this model. Cumulative growth in the fetal brain We examined the effect of the intervention on fetal brain weight and fetal BBRs at birth. Multiple linear regression analyses were conducted separately for the outcomes. To account for potential confounders, we controlled for race because of the imbalance across treatment groups after randomization. Cotinine group trajectory was also included in the model to account for possible changes in cotinine concentrations during the course of pregnancy.
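A minimal sketch of the 2-level growth model described above is given below, written with statsmodels rather than the SAS procedures actually used; the data frame, column names, and values are toy examples of ours, and the cotinine trajectory group would enter the formula as an additional categorical term.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy long-format data: one row per ultrasound scan, several rows per fetus.
# Column names and values are illustrative only.
df = pd.DataFrame({
    "fetus_id":  [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6],
    "ga_weeks":  [14, 20, 32, 15, 22, 33, 14, 21, 30, 16, 24, 34, 15, 23, 31, 14, 22, 33],
    "hc_mm":     [118, 172, 295, 124, 190, 300, 115, 180, 280, 130, 210, 310, 120, 195, 288, 117, 188, 302],
    "treatment": [1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0],   # 1 = 4 mg/d arm
    "female":    [0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1],
    "bmi":       [24.0] * 3 + [31.5] * 3 + [22.1] * 3 + [28.3] * 3 + [26.4] * 3 + [30.2] * 3,
})
df["ga_c13"] = df["ga_weeks"] - 13   # center gestational age at 13 wk, as in the text

# Level 1: head circumference as a function of centered gestational age.
# Level 2: treatment arm, fetal sex, and maternal BMI; a random intercept and a
# random slope per fetus approximate the growth model described above.
model = smf.mixedlm("hc_mm ~ ga_c13 + treatment + female + bmi",
                    data=df, groups=df["fetus_id"], re_formula="~ga_c13")
fit = model.fit()
print(fit.summary())
```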
Mean differences in the outcomes and their corresponding standard errors were reported. Figure 1 describes participant enrollment and follow-up. A total of 345 smoking pregnant women were enrolled in the trial; 171 women were randomly assigned to the high-dose folic acid group and 174 to the standard-dose folic acid group. Of these women, 258 (74.8%) had ≥2 ultrasound measurements of head circumference, and 215 (62.3%) had outcome data on head circumference at delivery. These numbers represent the count of participants included in the modified intention-to-treat analysis of brain growth trajectory and cumulative brain growth, respectively. Study participants Sociodemographic, lifestyle, and health-related characteristics of study participants by treatment arm are shown in Table 1. The baseline characteristics of the participants in the 2 groups were similar, except for race. Women of "other" races, including Hispanic women, were more likely to be assigned to the standard of care. The mean ± SD age of participants was 26.7 ± 5.6 y. The mean gestational age at enrollment and the mean baseline folate concentration were 12.3 ± 3.9 wk and 718.5 ± 187.0 ng/mL, respectively. The trial population predominantly comprised single/divorced women (90.1%) and without insurance or on public insurance plans (95.1%). Approximately two-thirds of the population was either overweight or obese (67.7%). There were no statistical differences in the rate of excluded outcomes due to fetal loss by treatment group. Compliance rate was also not significantly different in the higher-dose folic acid group (87.7%) compared with the standard of care group (92.7%) (P = 0.1741). Figure 2 illustrates the cotinine trajectories for trial participants. We discriminated 3 distinct groups of maternal cotinine velocities: one with a consistently high concentration of cotinine, another with a low cotinine concentration, and a third group with moderate cotinine concentration but a marked decline towards the end of gestation. Almost half of the trial participants (48.3%) belonged to the group with a consistently low cotinine concentration. At the end of the study, only 3.0% of the women with cotinine data at delivery (n = 8) had stopped smoking (zero cotinine concentration). Table 2 shows the results of multilevel linear growth models for the longitudinally measured fetal head circumference based on the modified intention-to-treat population. In the adjusted model, the mean head circumference at 13 wk (due to centering at 13 wk) for all fetuses was 109.2 mm (P < 0.001). This time corresponds to the beginning of the second trimester of pregnancy. The average rate of growth in head circumference per week starting at the onset of the second trimester of gestation was 9.66 mm, and this finding was statistically significant (P < 0.001). Although infants of participants who received the higher-strength dose of folic acid had a 1.18 mm larger head circumference than the infants of those who received the standard folic acid dose, this difference was not statistically significant (P = 0.28). The interindividual variance in the outcome was 67.7 mm (P < 0.001), indicating that the initial head circumference at a gestational age of 13 wk across fetuses was significantly different. This finding was mainly explained by differences in maternal BMI and fetal sex. The intraindividual variance was 70.1 mm and was statistically significant (P < 0.001), indicating significant variability in brain growth over time across children. 
Maternal cotinine group concentration accounted mainly for this difference. The cotinine group trajectory, however, had no significant effect on the rate of brain growth. Fetuses with a smaller head circumference at the beginning of the second trimester had a faster rate of growth than those with a larger head circumference (τ 10 = −6.00). Compared with male fetuses, female fetuses had a 3.2 mm slower growth rate in head circumference (P = 0.003). Cumulative growth of the fetal brain The mean ± SD head circumference of babies at birth was 33.8 ± 2.0 cm. Table 3 shows multivariable linear regression results of the effect of folic acid treatment on fetal brain weight at birth. Higher-dose folic acid had no significant effect on brain weight in our participants (mean ± SE difference: 6.90 ± 5.85 g; P = 0.24). Compared with infants of mothers in the low-cotinine groups, infants of mothers in the high-or moderate-cotinine groups had ∼30 g lower brain weight (P < 0.001). Infants of black mothers also had smaller brain weight at birth than infants of white mothers (P = 0.01). The fetal BBR for infants of the study participants ranged from 7.4% to 13.9%. The BBR of infants of mothers who received high-dose folic acid treatment was 0.33 percentage points lower than for infants of mothers who received the standard dose of folic acid (P = 0.04; Table 4). Unlike other measures of cumulative brain growth, high cotinine group and black race were associated with higher BBR. Compared with women who had low cotinine concentrations, mothers with high cotinine concentrations had 0.93% higher BBRs (P < 0.001). Infants of black mothers had ∼0.5% higher BBRs than infants of white mothers (P = 0.007). Discussion Our randomized clinical trial compared the efficacy of a combination of higher-strength folic acid supplementation versus standard-ofcare folic acid dose on prenatal brain growth among smokers in pregnancy. Higher-strength folic acid supplementation in combination with smoking cessation had no effect on intrauterine brain growth from the beginning of the second trimester of gestation through delivery. We observed a significant effect of maternal folic acid treatment on BBR, but no effects on brain weight at birth. The absence of a difference in the rate of intrauterine brain growth and brain weight at birth by trial arm might be explained by the initiation period of folic acid supplementation. The mean gestational age at study enrollment and commencement of supplementation was 12.3 wk, approximately the end of the first trimester of pregnancy. The infants of women on higher-strength folic acid experienced a 0.33 percentage point reduction in BBR compared with their counterparts on the standard treatment (P = 0.04). This finding suggests that folic acid does not favor the fetal brain over fetal body growth. A high BBR signifies a larger brain weight for a given head circumference, and this is commonly observed in SGA infants (17) and infants with intrauterine growth restriction (25). Therefore, the decrease in BBR found in this trial correlates with a lower risk of SGA birth among the higher-dose folic acid arm (26). Harel et al. (27) reported that a higher BBR was associated with a more severe intrauterine growth restriction process and a greater risk that the fetal brain would be affected (27). Our finding has potential clinical implications: it demonstrates that high-dose folic acid will be beneficial in terms of optimal brain growth development among infants of smokers. 
Significant relations between cotinine trajectory group and the cumulative measures of brain growth were also observed. A doseresponse relation was observed for all outcomes. Higher cotinine concentration was associated with worse outcomes, i.e., reduced brain weight and higher BBR. Lindley et al. (3) found similar effects of smoking on head circumference and BBR. In their observational study, nonsmokers were compared with mothers who stopped smoking by 32 wk of gestation, light smokers who continued to smoke, and heavy smokers who continued to smoke during pregnancy. A dose-response gradient was observed with these self-reported smoking levels. It is well documented that maternal smoking affects fetal brain development and results in neurocognitive issues, such as deficits in intelligence quotient in the offspring (28,29). Because infants born to mothers with lower cotinine concentrations had a reduced risk of suboptimal fetal brain growth than those born to mothers with higher concentrations, having women quit smoking will have significant implications for brain development. It is noteworthy that despite being enrolled in smoking cessation programs, only 3% of our participants quit smoking. Other findings in this trial are the increased risk of adverse cumulative brain growth outcomes among blacks. Our study confirms previous reports of racial disparities in birth outcomes, with black mothers having worse birth outcomes, including infants with lower birth weight and smaller head circumference, than white mothers (30). Our study has some limitations. We lost a relatively high proportion of our participants to follow-up. For instance, in our multilevel modeling, we had data on only 82.7% (n = 258) of the 319 women eligible to be included in the modified intention-to-treat analysis. There were no significant differences in demographic characteristics between the participants lost to follow-up and those who remained in the study. Therefore, we do not expect loss to follow-up to bias our results. We did not directly assess or control for the effect of diet, a possible confounder in this study. However, there is a reduced likelihood of dietary differences by treatment arms because of randomization. Further, the baseline comparison of RBC folate, a proxy for longterm folate diet, supports this claim. As with most clinical trials, the generalizability of our findings may be an issue. Voluntary participants in studies can be different from nonparticipants. Our study sample also included low-income women with a high proportion of minorities, further limiting the generalizability of our findings. On the other hand, this is a study strength because minority populations are understudied in clinical trials. Therefore, our result can enrich the literature on the effect of folic acid in minorities. Another strength of our study is the use of a double-blind, randomized clinical trial, which allows for causal inference. We also conducted modified intention-to-treat analysis, and therefore our findings likely closely represent the effectiveness of higher-strength folic acid in improving prenatal brain growth under real-world conditions. To our knowledge, this is the first randomized controlled trial to report on the efficacy of high-strength folic acid in combination with enrollment in a smoking cessation program in preventing adverse fetal brain outcomes among smokers in pregnancy. 
The vulnerability of the developing fetal brain is dependent on whether an exposure or its active metabolites reach the developing nervous system and the period of exposure (29). In our trial, we cannot say with certainty that folic acid supplementation was commenced at the critical period of brain growth, and that folic acid reached the brain at the dose at which we supplemented. Future experimental studies should investigate the role of early folic acid supplementation among smokers, starting from before conception until delivery. Doing so may help in identification of the critical period of development associated with maximum folate-associated brain growth. It is also crucial to understand how folic acid supplementation relates to blood folate concentrations in the fetal brain. Blood folate concentrations, including RBC and serum folate, are better proxies for the assessment of folate status (31). These biomarkers are recommended for assessing folate bioavailability for optimal growth and development of neural cells. In conclusion, we demonstrated a reduction in BBR with the use of higher-strength folic acid initiated during early-mid pregnancy. However, no treatment effects were found on intrauterine brain growth rate and brain weight. Our findings show that smokers in pregnancy may benefit from folate supplementation in reducing the risk of having infants with impaired brain-body proportionality.
2019-04-26T13:36:11.925Z
2019-04-04T00:00:00.000
{ "year": 2019, "sha1": "8867060bfd8efbc8eb3dccb8007fadd64e217692", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1093/cdn/nzz025", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8867060bfd8efbc8eb3dccb8007fadd64e217692", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
23227084
pes2o/s2orc
v3-fos-license
Human Pumilio Proteins Recruit Multiple Deadenylases to Efficiently Repress Messenger RNAs* Background: The mechanisms by which human PUF proteins repress target mRNAs remain unknown. Results: PUM1 and PUM2 reduce protein and mRNA levels of targets by recruiting the CNOT deadenylase complex and by a poly(A)-independent mechanism. Conclusion: PUMs employ deadenylation-dependent and -independent mechanisms of repression. Significance: Deadenylation is a conserved means of PUF repression but additional mechanism(s) contribute to mRNA regulation. PUF proteins are a conserved family of eukaryotic RNA-binding proteins that regulate specific mRNAs: they control many processes including stem cell proliferation, fertility, and memory formation. PUFs repress protein expression from their target mRNAs but the mechanism by which they do so remains unclear, especially for humans. Humans possess two PUF proteins, PUM1 and PUM2, which exhibit similar RNA binding specificities. Here we report new insights into their regulatory activities and mechanisms of action. We developed functional assays to measure sequence-specific repression by PUM1 and PUM2. Both robustly inhibit translation and promote mRNA degradation. Purified PUM complexes were found to contain subunits of the CCR4-NOT (CNOT) complex, which contains multiple enzymes that catalyze mRNA deadenylation. PUMs interact with the CNOT deadenylase subunits in vitro. We used three approaches to determine the importance of deadenylases for PUM repression. First, dominant-negative mutants of CNOT7 and CNOT8 reduced PUM repression. Second, RNA interference depletion of the deadenylases alleviated PUM repression. Third, the poly(A) tail was necessary for maximal PUM repression. These findings demonstrate a conserved mechanism of PUF-mediated repression via direct recruitment of the CCR4-POP2-NOT deadenylase leading to translational inhibition and mRNA degradation. A second, deadenylation-independent mechanism was revealed by the finding that PUMs repress an mRNA that lacks a poly(A) tail. Thus, human PUMs are repressors capable of deadenylation-dependent and -independent modes of repression. Messenger RNAs (mRNAs) are subject to extensive regulation throughout their lifespan (1). Synthesis and processing events of precursor mRNAs in the nucleus are regulated to yield mature mRNAs. Once exported to the cytoplasm, translation and stability of mRNAs are controlled to ensure that the appropriate amount of encoded protein is produced at the proper time and cellular location. The discovery of factors and mechanisms responsible for gene regulation is crucial to deepening our understanding of how misregulation contributes to disease. PUF (Pumilio and fem-3 binding factor) proteins are trans-acting factors that regulate mRNAs by binding specific sequences in 3′ untranslated regions (3′ UTR) (2). Members of the PUF family share a conserved RNA binding domain composed of eight α-helical repeats (3-8). These PUF repeats adopt a crescent shape, whose concave side binds to specific single-stranded RNA sequences. Each PUF repeat recognizes a single ribonucleotide base, mediated by three RNA recognition amino acids, and these contacts dictate the RNA binding specificity of each individual PUF protein (7). Humans and other vertebrates possess two canonical PUF proteins, PUM1 and PUM2, collectively referred to as PUMs (9).
PUMs share significant sequence similarity: amino acids outside of their RNA binding domains (RBD) share 75% identity, whereas those within are 91% identical (9,10). Both PUM1 and PUM2 bind with high affinity to the consensus sequence UGUANAUA, hereon referred to as a PUM response element (PRE) (7, 11-13). PUMs are widely expressed in tissues and cell types (9,14). Given their similar RNA binding specificities and broad expression, it is possible that PUMs compete for many of the same mRNAs, supported by identification of mRNAs that associate with PUMs (13). The mechanism(s) of mRNA regulation by human PUMs remains to be elucidated, and a complete understanding of PUM repression will facilitate identification of biologically relevant target mRNAs. A repressive role for human PUMs is supported by several observations. Overexpression of PUM2 reduces expression of reporter genes (52), and overexpression of PUM together with a putative partner NANOS3 was reported to inhibit E2F3 expression (53). Another study reported that reduction of PUM1 by RNA interference stabilized several mRNAs (33). PUMs were reported to repress the mRNA encoding the CDKN1B tumor suppressor (32) and, unique to this mRNA, PUM1 was postulated to license microRNA-mediated repression by disrupting base-pairing between specific PUM and microRNA binding sites (32). The role of deadenylases in yeast PUF repression suggested that human deadenylases might serve as PUM co-repressors. Humans possess multiple orthologs of the Pop2p and Ccr4p deadenylase enzymes (47). The human CNOT7 and CNOT8 proteins are related to yeast Pop2p, whereas human CNOT6 and CNOT6L are orthologous to yeast Ccr4p (47, 54-56). All four proteins have been reported to possess deadenylase activity (47, 57-59). Like their yeast counterparts, CNOT7 and CNOT8 form heterodimers with either CNOT6 or CNOT6L, and these pairs assemble with human orthologs of the yeast Not proteins to form large multisubunit complexes referred to as CCR4-NOT (CNOT) complexes (60-62). In this report, we explore the activities of human PUM1 and PUM2. We show that both PUMs are potent repressors that inhibit protein expression and reduce mRNA levels. We then investigate the mechanism of repression and show that purified PUM complexes contain CNOT deadenylases. Two deadenylase subunits interact directly with the PUMs. In vivo, we find that deadenylases are important PUM co-repressors and the poly(A) tail is necessary for efficient repression. We also present evidence for a poly(A)-independent mechanism of PUM repression. This research reveals two modes of PUM repression and thereby enhances our understanding of their regulatory functions to control important biological processes. EXPERIMENTAL PROCEDURES Plasmids-Renilla luciferase reporters (RnLUC) are based on psiCheck1 (Promega) with either three wild-type PRE or mutant PRE elements inserted into the XhoI and NotI sites in the 3′ UTR. The PRE sequence is as follows, with the PRE underlined: 5′-TTGTTGTCGAAAATTGTACATAAGCCAA, and the PREmt sequence is: 5′-TTGTTGTCGAAAATACAACATAAGCCAA. The altered specificity reporter, RnLUC 3xPRE UGG, was constructed with the following sequence: 5′-TTGTTGTCGAAAATTGGACATAAGCCAA. RnLUC HSL was created by replacing the cleavage/polyadenylation site from the psiCheck1 3′ UTR with a histone stem loop (HSL) sequence from the human H1F3 gene. Two or four PRE sequences were inserted upstream of the HSL to create RnLUC 2xPRE HSL and RnLUC 4xPRE HSL.
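As an aside on the PRE consensus quoted above, the brief sketch below (ours, in Python, not part of the original work) shows how reporter 3′ UTR sequences can be screened for UGUANAUA-type sites; the DNA strings are the wild-type and mutant PRE inserts quoted in this section, and the helper name is an assumption.

```python
import re

# Consensus PUM response element (PRE): UGUANAUA (N = any base); in the DNA
# reporter sequences above this appears as TGTANATA.
PRE_PATTERN = re.compile(r"TGTA[ACGT]ATA")

def find_pres(utr_seq: str):
    """Return (start, matched sequence) for each PRE-like site in a DNA 3' UTR."""
    return [(m.start(), m.group()) for m in PRE_PATTERN.finditer(utr_seq.upper())]

# Wild-type and UGU->ACA mutant PRE inserts quoted in the Plasmids section.
wild_type = "TTGTTGTCGAAAATTGTACATAAGCCAA"
mutant    = "TTGTTGTCGAAAATACAACATAAGCCAA"

print(find_pres(wild_type))   # one PRE-like match expected
print(find_pres(mutant))      # no match expected
```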
The firefly luciferase (FfLUC) plasmid pGL4.13 (Promega) was used as a control. Cell Culture and Transfections-Human HEK293 cells were cultured at 37°C under 5% CO2 in DMEM with glucose and 1× penicillin/streptomycin/glutamine and 10% FBS (Invitrogen). Drosophila D.mel-2 cells (Invitrogen) were cultured as previously described (63). Transfections of human cells were carried out with FuGENE HD (Promega) at a 3:1 ratio of lipid (μl) to DNA (μg). For luciferase assays, 2 × 10^4 cells were plated in white-walled 96-well plates and, after 24 h, were transfected with 100 ng/well of plasmid DNA. For RNA purifications and coimmunoprecipitations, 6 × 10^5 cells were transfected with 3 μg of plasmid DNA 24 h after seeding. D.mel-2 cells were transfected with Effectene (Qiagen) as previously described (63). For human PUM1 expression and repression assays, 400 ng of PUM1 expression vector was included in the transfection with reporters. Luciferase Assays-Renilla (75 ng) and firefly (25 ng) reporters were transfected into HEK293 cells. Forty-eight hours later, luciferase activity was measured with Dual-Glo reagent using a GloMax Multi+ luminometer (Promega). Relative light unit values were used to calculate a relative response ratio (RRR) by dividing the Renilla value from each well by the corresponding firefly value. Percent repression was then calculated as: repression (%) = 100 × (1 - RRRvariable/RRRcontrol), where RRRcontrol equals the RRR of RnLUC 3xPREmt or RnLUC. A minimum of three replicates were used to calculate mean values and mean ± S.E. All results were verified in multiple independent experiments. Dual luciferase assays from Drosophila cells were performed as previously described (63). RNAi of Pop2 and Ccr4 was confirmed by measuring depletion of Halotag-deadenylase fusions. D.mel-2 cells were transfected with 100 ng of pIZ HT-Pop2 or pIZ HT-Ccr4 with 100 ng of control pIZ HT. 1 ml of cell suspension was harvested and lysed for 1 h on ice in TNEM with 150 mM NaCl. HT was labeled with fluorescent Halotag ligand, TMR (Promega), for 30 min. Lysates were analyzed by SDS-PAGE and fluorescence detected with a Typhoon Trio fluorescence imager (GE Healthcare). Depletion was calculated relative to samples treated with LacZ control dsRNA and normalized to the HT internal control. Coimmunoprecipitations-Plasmids expressing FLAG-tagged human PUM1 and PUM2 were transfected into 6 × 10^5 HEK293 cells with HT fusions of CNOT6, CNOT6L, CNOT7, or CNOT8. Cells were lysed in TNEM with 150 mM NaCl and protease inhibitors. HT fusions were labeled with TMR ligand and treated with 10 units of RNase ONE and 4 μg of RNase A (Promega). Extracts were then bound overnight with end-over-end rotation at 4°C to pre-equilibrated anti-FLAG beads (Sigma). Beads were washed twice with TNEM with 250 mM NaCl and once with TNEM with 500 mM NaCl. Bound protein complexes were eluted with FLAG peptide (Sigma) at 4°C and passed over Micro Bio-Spin columns (Bio-Rad) to collect eluates. Eluates were analyzed by SDS-PAGE and fluorescence emission at 580 nm on a Typhoon Trio to detect TMR-labeled Halotag fusions. Western blots were performed and probed with a monoclonal anti-FLAG M2 antibody (Sigma). Purification of Recombinant Proteins-To purify PUM1 (aa 828-1176) and PUM2 (aa 705-1050) RNA binding domains and control CNOT6, pFN18A-based plasmids (Promega) encoding Halotag fusions of each protein were introduced into the KRX Escherichia coli strain (Promega) and induced with 0.1% (w/v) rhamnose for 12 h at 20°C.
Proteins were purified using Halolink resin (Promega). Beads were washed extensively with TNEM and 1000 mM NaCl and then equilibrated in TNEM with 250 mM NaCl. To confirm purification of the respective proteins, AcTEV protease (Invitrogen) was used to cleave CNOT6, PUM1 RBD, and PUM2 RBD from an aliquot of the Halolink beads. The eluted proteins were analyzed by Coomassie-stained SDS-PAGE (Fig. 4B). The remaining Halolink-bound proteins were used for Halotag pulldown assays. pMAL plasmids (New England Biolabs) encoding maltose-binding protein (MBP)-tagged CNOT6, CNOT7, and CNOT8 were transformed into the BL21 Gold E. coli strain and induced with 0.3 mM isopropyl 1-thio-β-D-galactopyranoside for 16 h at 20°C. Proteins were purified with the amylose affinity resin (New England Biolabs). Beads were washed three times with TNEM and 1000 mM NaCl and 1 mM DTT and three times with deadenylation buffer (50 mM Tris, pH 8.0, 1 mM MgCl2, 50 mM NaCl, 20% glycerol, and 1 mM DTT). Proteins were eluted with 10 mM maltose in deadenylation buffer. In Vitro Deadenylation Assays-Deadenylase activity of purified wild-type or mutant CNOT7 and CNOT8 enzymes was confirmed by incubating 1 μM of each enzyme with 200 fmol of a 36-nucleotide RNA substrate with a 5′ Cy5 fluorescent label and, on the 3′ end, a 10-nucleotide poly(A) tail (see PRE RNA sequence below) in 20 μl of deadenylation buffer (64). Control reactions contained 10 mM EDTA to chelate Mg2+. Reactions were incubated at 30°C for up to 120 min. An equal volume of 98% formamide and 20 mM EDTA was added, the samples were heated to 95°C for 5 min and then resolved on a 10% polyacrylamide, 7 M urea gel. Products were detected using a Typhoon fluorescence imager. In Vitro Binding of PUMs and CCR4-NOT Deadenylase Subunits-Recombinant prey proteins included MBP-CNOT7, MBP-CNOT8, or control MBP. For Halotag pulldown assays, 50 nM prey protein was added to 50 μl of TNEM with 250 mM NaCl and 10 μl of Halolink beads bound with HT-CNOT6, HT-CNOT7, or HT-CNOT8 bait proteins (1 μg each). Halolink beads alone served as a negative control. Binding reactions were incubated with rotation for 2 h at 4°C. Beads were washed 4 times with 1 ml of TNEM containing 500 mM NaCl and 0.5% Tween 20. Beads were collected by centrifugation at 1000 × g for 5 min. Bound proteins were eluted in 20 μl of SDS-PAGE loading dye by heating at 95°C for 5 min. Fifty percent of eluted proteins were then analyzed by SDS-PAGE and Western blotting using anti-MBP monoclonal antibody conjugated to horseradish peroxidase (New England Biolabs). Gel Shift Assays-PRE RNA ligand, 5′-TTGTTGTCGAAAATTGTACATAAGCCAAAAAAAAAA, was labeled with Cy5 (Dharmacon). PREmt RNA ligand, 5′-TTGTTGTCGAAAATACAACATAAGCCAAAAAAAAAA, was labeled with Dylight 650 (Dharmacon). RNA ligands were synthesized, deprotected, and PAGE purified prior to gel shift assays. PUM1 RBD or PUM2 RBD were allowed to bind to 200 fmol (10 nM) of RNA ligand in deadenylation buffer for 30 min at 37°C. Samples were then analyzed on a 6% polyacrylamide gel with 1× TB running buffer at 300 volts at 4°C. Gels were imaged with a Typhoon Trio. Purification of PUMs and Mass Spectrometry-Halotag, HT-PUM1, or HT-PUM2, expressed from plasmid pFN21A, were purified using the Halotag Mammalian Pulldown system (Promega). T150 flasks were transfected with each plasmid and, after 48 h, cells were washed with phosphate-buffered saline (PBS) and harvested at 2000 × g at 4°C.
Cells were suspended in 1 ml of Mammalian Lysis Buffer with Protease Inhibitor Mixture (Promega). Cells were passed through a 25-gauge needle 5 times, incubated for 5 min at 4°C, and then centrifuged for 5 min at 14,000 rpm. Halolink beads were then diluted with TBS (100 mM Tris-HCl, pH 7.5, and 150 mM NaCl) and incubated with the cell extract for 15 min at room temperature with rotation. Beads were washed three times with 10 ml of TNEM with 250 mM NaCl, followed by three washes with the same buffer lacking IGEPAL. Proteins were eluted with 10 units of AcTEV protease (Invitrogen) in 20 mM Tris-HCl, pH 8.0, and 300 mM NaCl. Peptides were prepared from each sample as follows. First, disulfide bonds were reduced with 2 mM DTT at 37°C for 30 min and blocked with 4 mM iodoacetamide at 23°C for 30 min in the dark. The blocking reaction was quenched by bringing the final concentration of DTT to 4 mM. Next, sequencing grade trypsin (Promega) at a 1:50 (mass:mass) enzyme to sample ratio was added and incubated overnight at 37°C. Peptides were then analyzed using nanoflow liquid chromatography (Waters) coupled to an ETD-enabled hybrid linear ion traporbitrap mass spectrometer (Thermo Scientific) via electrospray (65). Separation and data-dependent sampling conditions were used as previously described (66,67). Post-acquisition data processing was performed using a DTA generator and the COMPASS software suite as previously described (68). Protein identifications were assigned by searching the human International Protein Index database with the peptide mass spectra from two independent analyses using the open mass spectrometry search algorithm (OMSSA) (67,69). A false discovery rate threshold of 1% was applied to filter false positive identifications (67,70). To eliminate contaminants that bind Halolink resin or Halotag, an identical analysis was performed on control Halotag purifications. All proteins detected in both the control and PUM complexes were excluded. RNA Purifications and cDNA Preparation-RNA was purified from HEK293 cells harvested 48 h after transfection using the Maxwell 16 simplyRNA LEV cells kit and a Maxwell 16 instrument (Promega). RNA was eluted in 50 l of nucleasefree water and treated with Turbo DNase (Ambion). For first strand cDNA synthesis, RNA (1000 ng) was annealed with random hexamers (500 ng) (IDT) at 70°C for 5 min and cooled on ice. Reverse transcription was performed in reaction buffer with 3 mM MgCl 2 , 500 mM each dNTP, 0.5 l of RNasin Plus, and 1 l of GoScript reverse transcriptase (Promega). RT was omitted in control samples. Quantitative PCR-Multiplexed quantitative PCR was used to detect Renilla and firefly reporter mRNAs. Reactions were carried out in 25-l reactions with the Plexor 2-step kit (Promega). 5 l of cDNA was combined with 2ϫ Plexor Master Mix (Promega) and 100 nM each of the fluorescent primers (Biosearch Technologies). Reactions were performed in triplicate using a CFX96 Real-time PCR instrument (Bio-Rad). The conditions used were: 1) 95°C for 2 min; 2) 95°C for 5 s; 3) 60°C for 35 s. Steps 2 and 3 were repeated a total of 40 cycles. Each reaction was subjected to thermal melting and curves gave single peaks with the expected melting temperature. Amplification efficiencies for each primer set were optimized at 100% efficiency. Cycle thresholds (C t ) were measured using CFX Manager software (Bio-Rad) and imported in Plexor Analysis Software (Promega). Data were analyzed by the comparative C t method (71,72). 
Ct values were measured and normalized to an internal control firefly luciferase mRNA, where ΔCt = Ct,Renilla − Ct,firefly. Differences in mRNA levels were calculated using the ΔΔCt method, whereby ΔΔCt = ΔCt,target − ΔCt,control. "Control" indicates RnLUC lacking PREs and "target" indicates RnLUC 3xPRE. Changes in mRNA expression are represented as fold-change values, where fold change = 2^(−ΔΔCt). From fold change we calculated percent repression, which equals 100 × (1 − fold change).

Poly(A) Selection and Northern Blot Analysis-Total RNA samples from HEK293 cells expressing RnLUC, RnLUC 3xPRE, RnLUC 3xPREmt, and FfLUC control were purified and then polyadenylated mRNA was selected from 20 μg of total RNA by the PolyAtract mRNA Isolation System (Promega). RnLUC HSL and FfLUC RNAs were reverse transcribed and amplified by quantitative PCR as described above. Reverse primers contained the T7 promoter sequence. Riboprobes were transcribed with [α-32P]UTP using the T7 MaxiScript kit (Invitrogen) and purified by Sephadex G-25 columns. Blots were hybridized with probes overnight at 68°C with rotation. Blots were washed twice for 15 min with 2× SSC (300 mM NaCl, 30 mM sodium citrate) + 0.1% SDS and twice for 30 min with 0.1× SSC (15 mM NaCl, 1.5 mM sodium citrate) + 0.1% SDS. Blots were exposed to a phosphorimager screen and visualized with a Typhoon Trio.

RESULTS Human PUM1 and PUM2 Reduce Protein Expression and mRNA Levels-To study regulation by PUMs, we developed a luciferase reporter assay that recapitulates sequence-specific repression. Three binding sites for PUM1 and PUM2, designated PUM response elements (PRE), were inserted into a minimal 3′ UTR of an mRNA encoding Renilla luciferase (RnLUC 3xPRE, Fig. 1A). This PRE sequence UGUACAUA is a high affinity binding site for PUM1 and PUM2 (7). As a control for specificity, the UGU sequence of the PRE, which is crucial for PUM binding, was mutated to ACA to disrupt PUM binding (Fig. 1A, RnLUC 3xPREmt). Electrophoretic mobility shift assays confirm that PUM1 and PUM2 bind to the PRE with nearly equivalent affinity (Fig. 1B). Importantly, neither PUM bound the PREmt (Fig. 1B). As an additional control, a Renilla luciferase reporter lacking PRE sequences was tested (Fig. 1A, RnLUC). Each reporter was transfected into the human HEK293 cell line. As an internal control, a plasmid encoding firefly luciferase was cotransfected (Fig. 1A, FfLUC). Expression of each luciferase was subsequently measured (Fig. 1, C and D). Renilla expression from RnLUC 3xPRE was substantially repressed relative to RnLUC 3xPREmt or RnLUC (Fig. 1C). To normalize variations in transfection efficiency, the Renilla activity for each sample was divided by the corresponding firefly luciferase activity (Fig. 1D). From these values, we calculated a percent repression value, as a measure of PUMs repressive activity (Fig. 1E). The presence of the PRE elements in RnLUC 3xPRE elicited 71% repression relative to control reporters (Fig. 1E), indicating potent, specific repression by endogenous PUM1 and/or PUM2. Having established that PRE-dependent repression reduces protein output, we wished to determine whether the effect is manifested by changes in mRNA level; therefore, we purified RNA and performed multiplexed quantitative reverse transcription-polymerase chain reaction (qRT-PCR) to measure levels of reporter mRNAs (Fig. 1F). RnLUC Ct values were normalized to the internal control, FfLUC, to yield a ΔCt value (71,72).
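A minimal Python sketch of the comparative Ct arithmetic just described (and continued below) may help. The Ct values are hypothetical, chosen only so that the resulting fold change (about 0.22, i.e. roughly 78% repression) is of the same order as the value reported for RnLUC 3xPRE.

```python
# Sketch of the comparative Ct calculation described above; Ct values are hypothetical.
# deltaCt = Ct(Renilla) - Ct(firefly); deltadeltaCt = deltaCt(target) - deltaCt(control);
# fold change = 2**(-deltadeltaCt); percent repression = 100 * (1 - fold change).

def delta_ct(ct_renilla, ct_firefly):
    return ct_renilla - ct_firefly

def fold_change(dct_target, dct_control):
    return 2.0 ** (-(dct_target - dct_control))

# Hypothetical triplicate-mean cycle thresholds.
dct_control = delta_ct(ct_renilla=18.2, ct_firefly=17.0)   # RnLUC (no PREs)
dct_target  = delta_ct(ct_renilla=20.4, ct_firefly=17.0)   # RnLUC 3xPRE

fc = fold_change(dct_target, dct_control)
print(f"fold change = {fc:.2f}, percent repression = {100 * (1 - fc):.0f}%")
```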
From ⌬C t values we calculated fold-change for each sample, relative to negative control RnLUC (71,72). The foldchange of RnLUC 3xPRE mRNA was 0.22, indicating it was reduced by 78% relative to RnLUC mRNA (Fig. 1F, 3xPRE). Consistent with repression by PUMs, mutation of the PREs alleviated regulation (Fig. 1F, 3xPREmt). Northern blotting was then performed using purified mRNA to visualize reporter transcripts (poly(A) affinity purification was necessary for detection). Detection of FfLUC served as an internal control. Quantification of the data revealed that RnLUC 3xPRE mRNA was reduced 74% relative to RnLUC and RnLUC 3xPREmt (Fig. 1G), concordant with qRT-PCR results (Fig. 1F). Together, these findings demonstrated that PUM repression of the PRE bearing reporter substantially reduces protein and mRNA levels, and the reporters provide sensitive sensors for post-transcriptional repression by PUMs. Both PUM1 and PUM2 are expressed in HEK293 cells ( Fig. 2A). To demonstrate that the PRE-dependent repression is caused by endogenous PUM1 and PUM2, each protein was depleted by RNA interference (RNAi). Transfection of nontargeting control siRNAs had no effect on PUM expression ( Fig. 2A, Control). Treatment with siRNAs corresponding to PUM1 or PUM2 efficiently depleted the respective proteins ( Fig. 2A, PUM1 and PUM2). Treatment of cells with both PUM1 and PUM2 siRNAs substantially depleted both PUM1 and PUM2 ( Fig. 2A, PUM1 ϩ PUM2). We then measured the effect of depletion of PUM1, PUM2, or both on reporter expression. The control siRNAs had no effect on repression of RnLUC 3xPRE (Fig. 2B, 65% repression) relative to mock transfection without siRNA (Fig. 2B, None). Likewise, transfection of siRNAs to GAPDH had no effect on repression (Fig. 2B, GAPDH). Depletion of each PUM individually caused a modest loss of repression (Fig. 2B). Depletion of both PUM1 and PUM2 together substantially reduced PUM repression to only 15% (Fig. 2B, PUM1 ϩ PUM2). We conclude that both PUMs repress the PRE-bearing reporter. We also tested the impact of overexpression of PUMs but did not observe enhancement of repression (data not shown), indicating that PUM expression is not limiting. Together, these observations indicate that both PUM1 and PUM2 cause PRE-dependent repression, and that they have overlapping regulatory roles. The results in Figs. 1 and 2 validate the specificity and sensitivity of the PUM repression assay. PUM1 and PUM2 Repress Individually-Having shown that PUMs have overlapping capabilities to repress, we next assessed whether PUM1 and PUM2 individually exhibit repressive activity. To do so, we created a new reporter that responds to exogenously introduced PUM1 or PUM2. First, each PUM was programmed to bind a new PRE sequence (designated PRE UGG) by altering the RNA recognition amino acids of the sixth PUF repeat (R6as) (Fig. 3A) (7, 63). Importantly, wild-type PUMs do not bind UGG efficiently (7,11). A corresponding reporter, RnLUC 3xPRE UGG, was created by changing the nucleobase at position 3 of the PRE from uracil to guanine (Fig. 3, A and B). The reporters were then transfected into cells and regulation by endogenous PUMs or by PUM1 with altered specificity (PUM1 R6as) was measured. PUM1 R6as, fused to Halotag, was expressed from a transfected plasmid. As a control, a plasmid expressing only Halotag protein was introduced. As observed in Fig. 1, endogenous PUMs repressed the RnLUC 3xPRE but, importantly, did not affect RnLUC 3xPRE UGG, nor the negative controls RnLUC or RnLUC 3xPREmt (Fig. 
3C, Halotag). Expression of PUM1 R6as specifically repressed the RnLUC 3xPRE UGG reporter by 64% (Fig. 3C). PUM1 R6as did not change repression of RnLUC 3xPRE by endogenous PUMs, nor did it regulate RnLUC or RnLUC 3xPREmt (Fig. 3C). Next, we compared the repressive activity of PUM1 or PUM2 using the RnLUC 3xPRE UGG. PUM1 R6as repressed the reporter by 75% and PUM2 R6as repressed it by 69% (Fig. 3D), relative to the Halotag control. We conclude that PUM1 and PUM2 can independently repress mRNAs, and that PUMs can be programmed to specifically repress new target mRNAs.

PUM1 and PUM2 Interact with CCR4-NOT Deadenylase Complex Subunits-We hypothesized that PUM1 and PUM2 may recruit co-repressor proteins to mediate repression. PUM complexes had not been previously biochemically analyzed. To identify co-repressors, we purified PUM1 and PUM2 complexes and identified associated proteins. First, PUMs were expressed in HEK293 cells as fusions to Halotag and affinity purified. Purified complexes were eluted and tryptic digests were then analyzed by nanoflow reversed-phase liquid chromatography and electrospray ionization using a hybrid linear ion trap-orbitrap mass spectrometer. Peptide sequences and protein identifications were assigned by use of high accuracy mass spectral data (<10 ppm mass measurement) with a 1% false discovery rate cut off (67,70). To eliminate false-positives, an identical analysis was performed on control Halotag purifications; proteins detected in both the control and PUM complexes were excluded as contaminants. As a result of this analysis, multiple subunits of the CCR4-NOT (CNOT) deadenylase complex (47,61) were detected in purified PUM complexes including CNOT1, CNOT2, CNOT4, and CNOT10 (data not shown). Association of CNOT subunits with PUMs prompted us to investigate interaction of deadenylase enzyme subunits with PUMs. The CNOT complex interacts with heterodimers formed by pairing CNOT6 or CNOT6L with CNOT7 or CNOT8 deadenylases (47, 60-62). To analyze association of PUMs with these enzymes, FLAG-tagged PUM1 and PUM2 were expressed in cells that co-expressed Halotag fusion proteins of CNOT7, CNOT8, CNOT6, or CNOT6L. Cell extracts were prepared and treated with RNase One and RNase A to destroy RNA. Halotag fusions were fluorescently labeled with TMR fluor and detected in the cell lysates (Fig. 4A, Input). Next, PUM complexes were immunopurified using anti-FLAG monoclonal antibody and specifically eluted with FLAG peptide. Purification of PUM1 and PUM2 was confirmed by Western blot of the eluates (Fig. 4A). CNOT6, CNOT6L, and CNOT8 were strongly detected in both PUM1 and PUM2 eluates, whereas CNOT7 was weakly detected (Fig. 4A). The interactions were specific, because none of the deadenylases or the Halotag control protein associated with the anti-FLAG resin (Fig. 4A, Control). These data demonstrate that PUMs associate with CNOT deadenylase complexes. Because the PUM-deadenylase association was detected in RNase-treated extracts, protein interactions likely mediate the contacts and not RNA.

FIGURE 4 legend. A, coimmunoprecipitation of Halotag (HT)-deadenylase fusions with FLAG-tagged PUM1 and PUM2 from RNase-treated extracts (Input). As a negative control (Control), mock immunoprecipitations were performed with anti-FLAG beads from samples expressing Halotag (HT) protein and Halotag deadenylase fusion proteins. Proteins were detected in input extracts or purified FLAG eluates by fluorescence labeling with the Halotag ligand TMR or by anti-FLAG Western blot. B, Coomassie staining of recombinant, purified bait proteins: CNOT6, PUM1, and PUM2. PUMs were active for RNA binding (Fig. 1B). C, in vitro deadenylation assay using wild-type CNOT7 and CNOT8 or mutant CNOT7 mt and CNOT8 mt with a Cy5-labeled RNA substrate carrying a 10-nucleotide poly(A) tail (Cy5-RNApA10) or, as a marker, substrate lacking a tail (Cy5-RNApA0). EDTA was added as a negative control to chelate Mg2+ and thus inhibit deadenylation. D, Western blot (anti-MBP) of in vitro binding of recombinant, purified PUM1 and PUM2 to CNOT7 and CNOT8. Halolink-bound PUM1 and PUM2 were incubated with MBP fusions of CNOT7 or CNOT8. As a positive control, CNOT7 and CNOT8 interacted with CNOT6. Halolink beads alone (Control) and MBP served as negative controls.

PUM1 and PUM2 Bind the CNOT7 and CNOT8 Deadenylases in Vitro-To further investigate the interaction of PUMs with deadenylase enzymes, we performed in vitro protein interaction assays. As bait proteins, recombinant Halotag fusions of PUM1 and PUM2 were purified and immobilized to Halolink beads (Fig. 4B). These proteins were active in RNA binding assays (Fig. 1B). As a positive control, a Halotag fusion of CNOT6 was also purified. Recombinant CNOT7 and CNOT8, fused to MBP, were then purified and used as prey proteins. First, the enzymatic activity of each deadenylase was demonstrated by deadenylating a 5′ Cy5 fluorescently labeled RNA substrate with a 10-nucleotide poly(A) tail (Fig. 4C). CNOT7 and CNOT8 progressively deadenylated the substrate over time. As a control, chelation of Mg2+ with EDTA inactivated CNOT7 and CNOT8 (Fig. 4C, EDTA). Furthermore, mutation of the magnesium coordinating residues (Asp-40 and Glu-42) within the active site of each enzyme to alanine blocked deadenylation (Fig. 4C). Having demonstrated that CNOT7 and CNOT8 were active, we then measured binding to PUMs. Each prey was added to beads bound with CNOT6, PUM1, PUM2, or negative control beads. None of the prey proteins bound to control beads (Fig. 4D). The positive control, CNOT6, bound both CNOT7 and CNOT8, as expected (47), but not MBP (Fig. 4D). PUM1 and PUM2 bound to both CNOT7 and CNOT8, but not the control MBP (Fig. 4D). Therefore, human PUMs specifically interact with POP2 orthologs in vitro. Together with the results from co-immunoprecipitation studies (Fig. 4A), we conclude that PUMs bind either CNOT7 or CNOT8. We speculate that the preference for CNOT8 observed in Fig. 4A could result from additional factors in vivo that might modulate the interaction or differences in relative affinity. Because CNOT6 and CNOT6L bind CNOT7 or CNOT8, their co-purification with PUMs is likely the result of heterodimerization.

Deadenylation Inhibitors Alleviate PUM Repression-The observation that PUMs bind deadenylases suggests that deadenylation may be required for PUM-mediated repression. To address this hypothesis, we used the observation that mutations in the catalytic residues of deadenylases render them inactive (Fig. 4C), and when overexpressed in cells, these mutants block deadenylation in a dominant-negative manner (74-77). Therefore, we expressed mutant CNOT8 (CNOT8 mt) in which magnesium ion coordinating residues Asp-40 and Glu-42 were changed to alanine. The impact of these mutant deadenylases on PUM repression of the RnLUC 3xPRE reporter was then measured. CNOT8 mt expression plasmid was transfected over a range from 0 to 85 ng (Fig. 5A).
The CNOT8 mt protein was fused to Halotag to facilitate detection (Fig. 5B). A reciprocal gradient of the plasmid expressing only Halotag was used to balance transfections and Halotag alone served as a negative control. When Halotag alone was expressed (Fig. 5A, 0 ng CNOT8 mt), PUMs repressed the RnLUC 3xPRE by 77% relative to RnLUC, consistent with earlier observations (Fig. 1). Transfection of 20, 50, and 85 ng of the CNOT8 mt plasmid reduced PUM repression in a dose-dependent manner to 58, 51, and 40%, respectively (Fig. 5A). The effect of CNOT8 mt was specific to PUM repression; neither RnLUC nor RnLUC 3xPREmt reporter was affected (Fig. 5A). Dose-dependent expression of HT and HT-CNOT8 mt in these samples was confirmed by fluorescence detection (Fig. 5B). We conclude that CNOT8 mt has a dominant-negative effect that inhibits repression by PUMs. We next tested the ability of a catalytically inactivated mutant CNOT7 to affect PUM repression by the same strategy. Transfection of 20, 50, and 85 ng of CNOT7 mt expression plasmid reduced PUM repression from 78 to 56, 50, and 48%, respectively (Fig. 5, C and D). Again, the effect was specific to the 3xPRE bearing reporter; RnLUC and RnLUC 3xPREmt reporters were not significantly affected. Together these results demonstrate that dominant-negative mutant deadenylases block PUM repression, indicating that deadenylation plays an important role in PUM repression. Depletion of Deadenylases Inhibits PUM Repression-To corroborate the results above, we attempted to measure the impact of depletion of human deadenylases on PUM repression. Although we tested multiple siRNAs for each deadenylase, we were unable to substantially deplete CNOT7/8 and CNOT6/ 6L. Instead, we employed Drosophila D.mel-2 cells, which offer three advantages: 1) RNA interference elicited by dsRNA is highly efficient in these cells; 2) Drosophila possess one copy each of POP2 (i.e. CAF1) and CCR4 (i.e. TWIN), thus circumventing the potential redundancy of deadenylases in human cells (47,78,79); and 3) human PUMs actively repress in D.mel-2 cells (see below). We first confirmed the efficacy of RNAi-mediated knockdown of deadenylases. To measure depletion of each protein, Halotag fusions of POP2 or CCR4 were co-expressed with a Halotag internal control. Cells were then treated with dsRNAs corresponding to either POP2 or CCR4 and, after 48 h, levels of the Halotag fusion proteins were measured. POP2 and CCR4 were depleted by 99 and 94%, respectively (Fig. 6A), demonstrating efficient RNAi knockdown. We then tested the ability of human PUM1 to repress RnLUC 3xPRE in D.mel-2 cells. PUM1 repressed reporter protein expression by 45% relative to the empty expression vector (Fig. 6B). Simultaneous depletion of CCR4 and POP2 reduced repression to 28% (Fig. 6B). This effect was reflected at the mRNA level: PUM1 reduced RnLUC 3xPRE mRNA by 44% (Fig. 6C) and depletion of CCR4 and POP2 alleviated PUM1mediated reduction of the mRNA to 19% (Fig. 6C). Pop2 and Ccr4 mRNAs were depleted from these samples by 82 and 94%, respectively, as ascertained by qRT-PCR. Therefore, the POP2 and CCR4 deadenylases are necessary for efficient repression by PUM1. We sought to determine whether the reduction in PUM1 repression was due to depletion of CCR4, POP2, or both. RNAi knockdown of CCR4 did not reduce PUM1 repression (Fig. 6D, CCR4, 58% repression), whereas PUM1 repression was significantly abrogated by knockdown of POP2 (Fig. 6D, POP2, 39% repression). 
This result may reflect the fact that POP2 is the predominant deadenylase in Drosophila (78,79). This finding supports the conclusion that deadenylases are important for PUM repression, and that PUM1 can repress by recruiting the deadenylase complex via a conserved interaction with Pop2p orthologs.

FIGURE 6 legend (in part). B, RNAi depletion of endogenous CCR4 and POP2 reduced repression to 28%. C, RnLUC 3xPRE mRNA levels were measured from samples in panel B using multiplexed qRT-PCR to determine the fold-change in mRNA levels relative to empty expression vector, pIZ. PUM1 reduced mRNA levels 44% in the control sample versus only 19% when deadenylases were depleted by RNAi. D, RNAi depletion of endogenous POP2 inhibits PUM1 repression, whereas knockdown of endogenous CCR4 does not. Nontargeting double-stranded RNA corresponding to the bacterial LacZ gene served as a negative control. Statistical significance is indicated with *, representing p < 0.0001 by a two-tailed, unpaired t test.

PUMs Also Repress by a Poly(A) Independent Mechanism-We next asked whether the poly(A) tail, and therefore deadenylation, is absolutely necessary for repression by PUMs. Replication-dependent histone mRNAs lack a poly(A) tail; rather, their 3′ ends are formed by cleavage after a HSL structure (80). Translation of histone mRNAs is promoted by the 5′ cap and HSL, and degradation occurs via the 5′ decapping pathway (80). Consequently, the HSL provides a means of examining PUM repression in the absence of a poly(A) tail. We removed the cleavage/polyadenylation elements from the Renilla luciferase reporter and, in their place, inserted sequences encoding the HSL to drive 3′ end formation of the RnLUC HSL reporter (Fig. 7A). To verify that the RnLUC HSL lacked a poly(A) tail, this mRNA was expressed in cells. As a positive control, the polyadenylated RnLUC reporter was separately expressed. As an internal control, both samples also expressed the polyadenylated FfLUC mRNA. Total RNA was purified from each sample and the mRNAs were then purified using oligo(dT) magnetic beads to enrich poly(A) mRNA. Using qRT-PCR, each mRNA was detected in the poly(A)-selected fraction and normalized to the total amount. The poly(A)-selected RNA contained less than 6% of RnLUC HSL mRNA, whereas 100% of the control RnLUC mRNA was poly(A) selected (Fig. 7B). As expected, the FfLUC internal control was highly enriched in the poly(A) fraction (80-100%). These results confirm that at least 94% of the RnLUC HSL mRNA is not polyadenylated. To measure PUM repression, PREs were inserted into the 3′ UTR to create RnLUC 2xPRE HSL and 4xPRE HSL (Fig. 7A). The two and four PREs conferred 22 and 57% repression, respectively (Fig. 7C). To determine whether repression of HSL reporters by PUMs affected their mRNA level, we measured the levels of each mRNA by qRT-PCR. PUM repression did not reduce the RnLUC 2xPRE HSL reporter mRNA and, in fact, the 4xPRE HSL mRNA was more abundant than RnLUC HSL mRNA (Fig. 7D). This indicates that PUM repression of the HSL reporters may occur at the translational level, rather than by direct activation of mRNA degradation pathways. From these data we conclude that PUMs can repress mRNAs lacking poly(A) tails and, therefore, can also repress by a deadenylation-independent mechanism.

DISCUSSION Our results demonstrate that both human PUM1 and PUM2 are potent repressors that reduce levels of target mRNAs and cause a corresponding decrease in protein expression (Fig. 1).
Endogenous PUMs have overlapping function and act redundantly to repress protein expression (Fig. 2). We show that human PUM1 and PUM2 repress autonomously and can be programmed to regulate new mRNAs, which offers potential therapeutic value for developing designer PUMs to reduce expression of deleterious genes (Fig. 3) (81, 82). Furthermore, our results identify two modes of repression: deadenylation-mediated repression and a deadenylation-independent mechanism.

FIGURE 7 legend. Poly(A) independent repression by PUMs. A, RnLUC reporters that lack a 3′ poly(A) tail were created by replacing the cleavage/polyadenylation sites with a HSL processing signal. Two or four PREs were inserted into the 3′ UTR to create RnLUC 2xPRE HSL and RnLUC 4xPRE HSL, respectively. B, graph of fold-enrichment of RnLUC HSL, RnLUC, and FfLUC internal control mRNAs in the poly(A)-selected fraction isolated using oligo(dT) affinity purification. Fold-enrichment was measured by qRT-PCR analysis of poly(A)-selected mRNA, normalized to total, and calculated relative to polyadenylated RnLUC. C, graph of percent repression relative to RnLUC HSL for the indicated reporters, showing that endogenous PUMs repress 2x and 4xPRE HSL reporters. D, graph of fold-change in reporter mRNA levels measured by multiplexed qRT-PCR and calculated relative to the RnLUC HSL control.

Our data provide the first evidence that human PUMs use deadenylase enzymes as co-repressors. PUMs physically associate with CNOT deadenylase subunits, including the four known deadenylase enzymes, CNOT6, -6L, -7, and -8 (Fig. 4), mediated by direct binding to human Pop2 orthologs, CNOT7 and CNOT8 (Fig. 4). The association of CNOT6 and -6L with PUMs is likely bridged via CNOT7 and CNOT8. Thus, we propose that PUMs recruit multiple deadenylase complexes to efficiently repress target mRNAs. The regulatory role of deadenylases in PUM repression is supported by the ability of dominant-negative CNOT7 and CNOT8 mutants to inhibit PUM repression (Fig. 5). Importantly, these dominant-negative mutants were previously shown to inhibit deadenylation when expressed in vivo (74-77, 79). Further support is provided by data showing that depletion of deadenylase enzymes reduces the magnitude of PUM repression (Fig. 6). Therefore, we conclude that deadenylation is necessary to achieve robust repression. Removal of the poly(A) tail through concerted action of PUMs and deadenylases is anticipated to reduce translation efficiency and, at the same time, initiate degradation of the mRNA by either 5′ decapping-mediated decay, 3′ decay by the exosome, or both pathways (83). This model is supported by our observation that protein and mRNA levels are concomitantly reduced by PUM repression. In accordance with this model, a previous study concluded that PUM1 promoted degradation of target mRNAs (33). It is noteworthy that we were unable to detect partially or fully deadenylated mRNAs, likely because these intermediates are unstable and of low abundance. Although deadenylases are important for PUM repression, several observations provide evidence for a second poly(A)-independent repression mechanism. First, dominant-negative CNOT7/8 mutants do not completely block repression in vivo (Fig. 5). Second, depletion of deadenylases does not fully alleviate PUM repression (Fig. 6). The third, more telling finding is that PUMs repress target mRNAs with a 3′ HSL, indicating that a poly(A) tail, and consequently deadenylation, is not absolutely essential.
Taken together, these data support an additional deadenylation-independent repression mechanism. Deadenylation-dependent and -independent mechanisms may function together to achieve maximal regulation. Indeed, the magnitude of repression of HSL mRNAs was less than that observed with polyadenylated reporter. Our results are reminiscent of a study that analyzed repression by the Drosophila PUF protein, PUMILIO, wherein embryos were injected with reporter mRNAs either bearing a poly(A) tail or lacking a tail. PUMILIO repressed the poly(A) mRNA most efficiently and, to a lesser degree, the tail-less RNA (40). Other mRNA regulators have also been reported to repress by deadenylationindependent mechanisms. For instance, artificial tethering of the miRNA effector protein GW182 or the CNOT complex can inhibit HSL reporters (87,88), suggesting that PUM recruitment of CNOT might cause translation repression independent of deadenylation. How do PUMs cause deadenylation-independent PUM repression? In addition to deadenylation, PUMs could activate another mRNA decay step, such as decapping; although the observation that the PRE containing HSL target mRNA were not degraded argues against this hypothesis (Fig. 7). Alternatively, PUMs might interfere with translation, supported by work in model organisms indicating that PUFs can inhibit translation (48,51,73). Germane to this idea, PUFs were recently reported to bind to a translation elongation factor (51). Furthermore, we recently characterized conserved repression domains in the N terminus of Drosophila and human PUMs that may elicit deadenylation independent repression (63). Future investigations will evaluate these possible mechanisms.
2018-04-03T00:56:45.888Z
2012-09-06T00:00:00.000
{ "year": 2012, "sha1": "7aa8f1a5f65f48733b9f9d2d026f86eb0fd00b26", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/287/43/36370.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "ecf1e8809dfe9ae5255fe523e1c42c2abf168f20", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
269760769
pes2o/s2orc
v3-fos-license
Impact of treatment interval between neoadjuvant immunochemotherapy and surgery in lung squamous cell carcinoma Objective The optimal timing for surgery following neoadjuvant immunochemotherapy for lung squamous cell carcinoma appears to be a topic of limited data. Many clinical studies lack stringent guidelines regarding this timing. The objective of this study is to explore the effect of the interval between neoadjuvant immunochemotherapy and surgery on survival outcomes in patients with lung squamous cell carcinoma. Methods This study conducted a retrospective analysis of patients with lung squamous cell carcinoma who underwent neoadjuvant immunochemotherapy between January 2019 and October 2022 at The First Affiliated Hospital, Zhejiang University School of Medicine. Patients were divided into two groups based on the treatment interval: ≤33 days and > 33 days. The primary observational endpoints of the study were Disease-Free Survival (DFS) and Overall Survival (OS). Secondary observational endpoints included Objective response rate (ORR), Major Pathological Response (MPR), and Pathological Complete Remission (pCR). Results Using the Kaplan-Meier methods, the ≤ 33d group demonstrated a superior DFS curve compared to the > 33d group (p = 0.0015). The median DFS for the two groups was 952 days and 590 days, respectively. There was no statistical difference in the OS curves between the groups (p = 0.66), and the median OS was not reached for either group. The treatment interval did not influence the pathologic response of the tumor or lymph nodes. Conclusions The study observed that shorter treatment intervals were associated with improved DFS, without influencing OS, pathologic response, or surgical safety. Patients should avoid having a prolonged treatment interval between neoadjuvant immunochemotherapy and surgery. Supplementary Information The online version contains supplementary material available at 10.1186/s12885-024-12333-3. Introduction Approximately 23% of non-small cell lung cancers are characterized as lung squamous cell carcinoma(LUSC) [1].Survival rates for LUSC remain suboptimal, leading to unsatisfactory clinical outcomes.LUSC is known to be highly immunogenic [2].The use of preoperative programmed cell death protein 1 (PD-1) or its ligand PD-L1, either as monotherapy or in combination with chemotherapy, has been associated with improved outcomes in LUSC [3].Nonetheless, numerous questions remain concerning the application and efficacy of immunochemotherapy. Liu et al. [4]demonstrated that a short interval (4-5 days) between the initiation of neoadjuvant immunotherapy and resection of the primary tumor is crucial for achieving optimal therapeutic efficacy.Prolonging the duration (10 days) or shortening it (2 days) eliminated the effectiveness of immunotherapy.The finding suggests that the treatment interval can significantly influence therapeutic efficacy.The optimal timing for surgery following neoadjuvant immunochemotherapy often seems overlooked.Many clinical studies lack a strict definition regarding this interval, with durations reported ranging from 21 to 49 days post the last neoadjuvant treatment [5][6][7][8][9][10]. Consequently, this study aims to examine the influence of the treatment interval between neoadjuvant immunochemotherapy and surgery on the prognosis of patients diagnosed with LUSC. 
Methods This study retrospectively analyzed patients with stage IB-IIIB (T3N2, T4N2) LUSC who underwent neoadjuvant immunochemotherapy at The First Affiliated Hospital, Zhejiang University School of Medicine between January 2019 and October 2022. All the patients received between 2 and 4 cycles of neoadjuvant immunotherapy in combination with platinum-based doublet chemotherapy (comprising a platinum agent and paclitaxel) before surgery. The most recent follow-up for this study took place in July 2023. We collected patients' basic information, tumor response to neoadjuvant treatment, adverse events related to neoadjuvant therapy, extent of tumor regression, survival status, and other data through the hospital's electronic medical record system and telephone follow-up. Preoperative and postoperative staging was conducted in accordance with the 8th edition of the American Joint Committee on Cancer (AJCC) Lung Cancer Staging Manual's Tumor, Lymph Node, and Metastasis (TNM) staging system [11]. The Charlson Comorbidity Index (CCI) was used to quantify patients' comorbidities [12]. Charlson also proposed a CCI scoring standard that includes an age weight [13]. After adding the age weight to the comorbidity score, the age-adjusted CCI (aCCI) score is obtained. Based on the range of the aCCI score, the severity of comorbidities is divided into three levels: none/mild comorbidities (aCCI score of 0-1), moderate comorbidities (aCCI score of 2-3), and severe comorbidities (aCCI score ≥ 4). Adverse events related to neoadjuvant treatment were evaluated based on the National Cancer Institute Common Terminology Criteria for Adverse Events (NCI-CTCAE) version 5.0 [14]. We evaluated the extent of tumor response to neoadjuvant treatment using the Response Evaluation Criteria in Solid Tumors (RECIST 1.1) [15], which is a standard criterion for assessing the efficacy of treatment in solid tumors. Complete Remission (CR): the complete disappearance of all target lesions, with no residual evidence of disease. Partial Remission (PR): a reduction in the sum of the longest diameters of target lesions by at least 30%. Progressive Disease (PD): an increase of at least 20% in the sum of the longest diameters of target lesions or the appearance of new lesions. Stable Disease (SD): a status where changes fall between partial remission and progression [16]. The Objective Response Rate (ORR) is calculated as the sum of individuals achieving complete remission and partial remission, divided by the total number of individuals. All patients underwent PET-CT examination before neoadjuvant treatment. All patients underwent EBUS or biopsy before neoadjuvant treatment. The treatment interval is defined as the duration between the last date of drug treatment and the date of surgery. Based on this interval, patients were divided into two groups: the ≤ 33 days group and the > 33 days group. The primary endpoints of this study were Disease-Free Survival (DFS) and Overall Survival (OS). The secondary endpoints included Objective Response Rate, Major Pathological Response (MPR), and Pathological Complete Remission (pCR). DFS is defined as the duration between the date of surgery and the date of the event occurrence. OS is defined as the duration between the date of the first neoadjuvant treatment and the date of the event occurrence. MPR was defined as 10% or fewer viable tumor cells in the resected primary tumor, and pCR was defined as the absence of any viable tumor in the resected tissues and dissected lymph nodes [16,17].
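To make the response definitions above concrete, the following Python sketch classifies a simplified target-lesion response and computes the ORR. It is an illustration, not the authors' procedure: full RECIST 1.1 also weighs non-target lesions, new lesions, and an absolute 5 mm increase requirement for PD, which are omitted here, and the lesion measurements are hypothetical.

```python
# Simplified sketch of RECIST 1.1 target-lesion response and ORR (illustrative only).

def recist_response(baseline_sum_mm: float, current_sum_mm: float) -> str:
    if current_sum_mm == 0:
        return "CR"                    # all target lesions disappeared
    change = (current_sum_mm - baseline_sum_mm) / baseline_sum_mm
    if change <= -0.30:
        return "PR"                    # >= 30% decrease in sum of longest diameters
    if change >= 0.20:
        return "PD"                    # >= 20% increase in sum of longest diameters
    return "SD"

def objective_response_rate(responses) -> float:
    return 100.0 * sum(r in ("CR", "PR") for r in responses) / len(responses)

# Hypothetical per-patient sums of target-lesion diameters (baseline, post-treatment), in mm.
patients = [(47, 30), (52, 55), (40, 12), (60, 78), (45, 0)]
responses = [recist_response(b, c) for b, c in patients]
print(responses, f"ORR = {objective_response_rate(responses):.0f}%")
```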
Patients meeting the following criteria were included in this study: (1) age between 18 and 80 years; (2) diagnosed with potentially resectable lung cancer confirmed by imaging, pathological histology, or cytology, and requiring neoadjuvant treatment as per standard diagnostic and therapeutic protocols for lung cancer prior to curative surgery; (3) ECOG performance status score of 0-1; (4) no prior treatment for lung cancer, including surgery, chemotherapy, radiotherapy, targeted therapy, hormone therapy, or immunotherapy. Patients with any of the following conditions were excluded: (1) lack of essential pre-treatment imaging assessment; (2) presence of distant organ metastasis. We performed intergroup analysis using t-tests, Mann-Whitney U tests, chi-square tests, or Fisher's exact test. Analysis was conducted using the Cox regression model and logistic regression. We compared DFS and OS between groups using Kaplan-Meier methods and the log-rank test. All statistical tests in this study were two-tailed, with significance considered at a P-value < 0.05. All statistical analyses were performed using R software (version 4.2.1). The study was approved by the institutional ethics board of The First Affiliated Hospital, Zhejiang University School of Medicine (No. 2023-0472) and individual consent for this retrospective analysis was waived. Results This study encompassed a total of 204 participants, with a median treatment interval of 33 days. In the ≤ 33 days group, there were 108 people, and the median treatment interval was 29 days; in the > 33 days group, there were 96 people, and the median treatment interval was 38 days. The treatment intervals of the two groups showed a bimodal distribution and there was a statistical difference (p = 0). The cohort consisted of 199 males (97.5%) and 5 females (2.5%). Males had a median age of 65 years, whereas for females the median age was 66 years. There was no statistical difference in the aCCI scores between the two groups. Moreover, the median initial tumor diameter was consistent at 47 mm for both groups, again showing no statistically significant variance (p = 0.359) (Fig. 1). Detailed baseline information can be found in Table 1.
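The survival comparison described above was performed in R by the authors; purely as an illustration of the Kaplan-Meier and log-rank steps, a Python analogue using the lifelines package is sketched below. The durations and event indicators are synthetic placeholders, not the study data.

```python
# Illustrative sketch only: Kaplan-Meier estimation and log-rank comparison of DFS
# between the two interval groups, using synthetic data (the study itself used R 4.2.1).

import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Synthetic DFS in days and event indicators (1 = recurrence/metastasis, 0 = censored).
t_short = rng.exponential(900, size=108); e_short = rng.binomial(1, 0.5, size=108)
t_long  = rng.exponential(600, size=96);  e_long  = rng.binomial(1, 0.6, size=96)

kmf = KaplanMeierFitter()
kmf.fit(t_short, event_observed=e_short, label="<= 33 days")
print("median DFS (<= 33 d):", kmf.median_survival_time_)
kmf.fit(t_long, event_observed=e_long, label="> 33 days")
print("median DFS (> 33 d):", kmf.median_survival_time_)

result = logrank_test(t_short, t_long, event_observed_A=e_short, event_observed_B=e_long)
print("log-rank p-value:", result.p_value)
```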
Based on the clinical stage, there was no statistically significant difference in the distribution of tumor stages between the two groups (p = 0.507).The majority of patients in both groups were classified as stage IIIA, with 40 (37%) in the < = 33 days group and 29 (30.2%) in the > 33 days group.There were no statistically significant differences in the number of treatment cycles (p = 0.58) or the choice of immunotherapy drugs (p = 0.139) between the two groups.Following neoadjuvant immunochemotherapy, there was no statistically significant difference in the therapeutic evaluation between the two groups (p = 0.742), with 63 (58.3%) individuals achieving PR in the < = 33 days group and 61 (63.5%) in the > 33 days group.The ORR was 58.3% in the < = 33 days group and 63.5% in the > 33 days group, with no statistically significant difference (p = 0.447).Regarding adverse events between the two groups, there was no statistically significant difference in Grade III adverse events, with 10 (9.3%) individuals in the < = 33 days group and 6 (6.3%) in the > 33 days group (p = 0.78).The main reasons for these adverse events included blood cell reduction (11 individuals), liver impairment (3 individuals), skin and mucous membrane reactions (1 individual), and gastrointestinal reactions (1 individual).There was one individual in each group with Grade IV adverse events, accounting for 0.9% and 1% respectively, and both cases were due to granulocyte reduction (2 individuals).Comprehensive data on neoadjuvant immunochemotherapy is detailed in Table 2. The surgical approaches did not show any statistically significant difference between the two groups (p = 0.748).The most common surgical procedure in both groups was pulmonary lobectomy, with 48 (44.4%) in the < = 33 days group and 48 (50%) in the > 33 days group.In the < = 33 days group, 103 (95.4%) individuals achieved R0 resection, while in the > 33 days group, there were 88 (91.7%) individuals.This difference was not statistically significant (p = 0.28).It was observed that in the < = 33 days group, there were more cases where minimally invasive surgeries were converted to open surgeries compared to the > 33 days group, and this difference was statistically significant [27 (25%) individuals vs. 10 (10.4%) individuals, p = 0.007].The number of lymph node dissection did not show any statistically significant difference between the two groups (p = 0.15).The median length of hospital stay was 7 days for both groups, and there was no statistically significant difference (p = 0.509).For further details regarding surgical-related information, please refer to Table 3. 
Based on yp-stage, there was no statistically significant difference in tumor staging between the two groups (p = 0.337).The majority of patients in both groups were classified as yp-stage IA, with 45 (41.7%) individuals in the < = 33 days group and 34 (35.4%)individuals in the > 33 days group.In terms of achieving Tumor MPR, there were 43 (39.8%)individuals in the < = 33 days group and 42 (43.8%)individuals in the > 33 days group, with no statistically significant difference (p = 0.569).Similarly, in achieving pCR, there were 25(23.1%)individuals in the < = 33 days group and 18(18.8%)individuals in the > 33 days group, again with no statistically significant difference (p = 0.442)(Fig.2).Pathological details can be found in Table 2.The results of the logistic regression univariate analysis analysis indicated that the treatment interval does not impact the pathological response of tumors and lymph nodes (Supplementary Table 1).The 90-day postoperative mortality rates were 0% and 1.04%, respectively, with no statistical difference.A total of 31 people experienced recurrence or metastasis.In the < = 33d group, 11 people (10.2%) were affected, of which 5 people (4.6%) had local recurrence and 6 people (5.6%) had distant metastasis.In the > 33d group, 20 people (20.8%) were affected, of which 12 people (12.5%) had local recurrence and 8 people (8.3%) had distant metastasis. Based on the Kaplan-Meier methods, the < = 33 days group exhibited a better DFS curve compared to the > 33 days group (p = 0.0015) (Fig. 3).The median DFS for the two groups was 952 days and 590 days, respectively.However, there was no statistically significant difference in the OS curves between the two groups (p = 0.66), and the median OS was not reached (Fig. 4). Discussion Our research found that the treatment interval affects DFS in LUSC, with patients who had shorter treatment intervals experiencing better DFS outcomes.It was observed that patients with shorter treatment intervals exhibited a slightly better OS curve in some instances, despite lacking statistical significance.Omarini et al. [18] found that shorter treatment intervals after neoadjuvant chemotherapy correlated with better OS and Recurrence-Free Survival in the patients with breast cancer .This is similar to our results.We will continue to monitor the subsequent survival of the patients in this study.Additionally, this study observed that the treatment interval does not impact MPR or pCR.There were no statistically significant differences between the two groups in this regard.This is consistent with previous research findings [19].With similar baseline characteristics, it was observed that the treatment interval did not affect the duration of surgery, the amount of bleeding, or the length of hospital stay.The findings suggest that undergoing surgery with a shorter treatment interval is safe. 
There was no statistical difference in the adverse reactions caused by neoadjuvant treatment between the two groups of patients, suggesting that the patients' physical condition is unlikely to explain the differences in treatment interval and long-term outcomes. Patients were evaluated for surgical indications by the primary physician through a Multidisciplinary Team assessment 3-4 weeks after the completion of the last neoadjuvant treatment, and then took about a week to complete the hospital admission and surgical procedures. This may result in variations in the treatment intervals for the patients. At the same time, due to patients' hesitancy about surgery, the time spent on re-examinations, scheduling surgery, and other personal reasons, some patients had a longer treatment interval. In this study, operations performed after a shorter treatment interval had a higher proportion of conversion to open surgery. Patients after neoadjuvant therapy might experience changes such as tissue edema, destruction of tissue gap structures, increased fragility of capillaries, and tissue adhesion caused by tumor shrinkage [20,21]. We believe that appropriately extending the treatment interval might allow tissue edema to subside and interstitial spaces to reform, enabling surgeons to maintain the original operation. Additionally, because of sample size differences in the variables of conversion to open surgery during the operation, stage, and pCR rate, we believe the results of the Cox regression might have been influenced. ypStage as well as MPR/pCR are confounding factors; therefore, the ypStage variable was not included in the multivariate analysis. In the univariate analysis, tumor MPR and pCR had a positive impact on DFS. This provides evidence on whether MPR and pCR can serve as surrogate endpoints in survival analysis. In this study, the proportion of males was significantly higher than that of females. Upon reviewing medical records, we found that this might be because many female patients had adenocarcinoma with gene mutations and ultimately underwent targeted therapy. Many patients with stage IB/IIA, due to comorbidities, were temporarily unable to undergo surgical treatment or chose neoadjuvant treatment due to hesitancy about surgery. Studies investigating the treatment interval for other types of tumors have yielded varying results. Du et al. [22] suggest that prolonging the treatment interval (> 8 weeks) in neoadjuvant chemoradiotherapy can improve the pCR rate in rectal cancer. Sanford et al. [23] suggest that delaying breast cancer surgery by more than 8 weeks after neoadjuvant chemotherapy can have a negative impact on OS. These studies used different approaches to neoadjuvant therapy, which may have influenced the results. However, the effect of the treatment interval in neoadjuvant immunochemotherapy deserves more attention. Liu et al.
[4] discovered that there was an increased proportion of IFN-producing lung tumor-specific T cells in neoadjuvant immunotherapy with shorter treatment intervals. Previous studies have demonstrated that the efficacy of neoadjuvant immunotherapy relies on CD8+ T cells and IFN [24]. They propose timely removal of the primary tumor at the height of tumor-specific T cell expansion. During neoadjuvant immunotherapy, the appropriate timing for primary tumor resection might play a crucial role in the expansion and functionality of tumor-specific T cells (especially gp70 tetramer-specific CD8 T cells). The primary tumor serves as an essential source of tumor antigens, housing a significant number of tumor-specific gp70-T cells. If these cells remain in the primary tumor for an extended period, they might become functionally impaired or exhausted, losing their ability to target tumor cells. Exhausted T cells may exhibit increased inhibitory receptors (such as PD-1) and might not effectively respond to tumor antigens. By resecting the primary tumor at the right moment, these cells can be prevented from being trapped within the tumor. This allows them to migrate to other parts of the body, such as metastatic sites, enhancing the efficacy of neoadjuvant immunotherapy and potentially improving long-term survival rates. This provides some theoretical support for the impact of treatment intervals on survival. However, the mechanisms underlying the relationship between treatment intervals and survival outcomes in neoadjuvant immunochemotherapy are not yet fully understood. Further research is needed to elucidate this issue. In clinical practice, treatment intervals might not have received the attention they deserve, and there might not be standardized guidelines in place. However, patients could experience anxiety due to extended periods without undergoing surgery. Additionally, treatment intervals could potentially impact survival outcomes. We have identified and reported this clinical phenomenon, although the specific mechanisms are not yet clear. More research is needed in the future to study the optimal treatment timing, and it is crucial to enhance patient education and communication regarding treatment intervals and their potential implications. In clinical practice, it is necessary to reduce delays in treatment intervals caused by hospital admission procedures and individual patient reasons. Conclusion This study found that shorter treatment intervals were associated with better DFS. However, treatment intervals did not affect OS, pathological response, or surgical safety. Patients should avoid having a prolonged treatment interval between neoadjuvant immunochemotherapy and surgery. Limitations This study is a single-center retrospective study with a limited sample size. Future research should encompass multi-center studies to mitigate selection biases.
2024-05-15T05:34:03.501Z
2024-05-13T00:00:00.000
{ "year": 2024, "sha1": "2ed6cad4705d1c0c698a88316413be276152462e", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "0c5046fce6b926b2fcdaf5f934634f502aeb8868", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
6312760
pes2o/s2orc
v3-fos-license
How to Predict Earthquakes with Microsequences and Reversed Phase Repetitive Patterns A strong earthquake is always preceded by groupings of shocks whose identification and understanding constitute a sound method for improving short-term earthquake forecasts. Thanks to a graphical method, we have identified and classified some microsequences and reversed phase repetitive patterns that precede the hazardous events. The seismic microsequences include information useful for knowing in advance the beginning of the energy release and accumulation phases that usually precede and follow a moderate-to-high magnitude earthquake. Their identification and correct interpretation allow us to determine various warning signals. In particular, through the analysis of their shape and position in the seismic sequence we can claim that the strongest earthquakes occur shortly after the formation of some peculiar microsequences. The checks carried out on large data sets related to earthquakes that occurred in the past have shown that the analysis procedures developed do not depend on the size of the area analyzed while predicting a high percentage of moderate-to-high magnitude earthquakes. Introduction In any earthquake prediction study, it is essential to identify the precursory phenomena within the seismic data to be analyzed [1]. To this end, it is necessary to analyze a solid scientific database to identify and classify the premonitory models that precede big seismic events, whose nature and shape, variable from one event to another, are probably controlled by the tectonic environment (fault geometry, strain rate) and the fault plane heterogeneity [2]. The seismic sequence of a given area includes all the information needed to understand its evolution in space and time [3]; through an in-depth analysis of its inner structure it is possible to draw conclusions related to strong-event prediction. In fact, the long-term sequence includes smaller-scale, medium-term sequences, which, in turn, contain short- and imminent-term sequences that influence the sequence evolution in relation to development speed and magnitude (space-time evolution). The analysis performed on several seismic sequences has identified, in short- and imminent-term time windows, particular groupings of shocks (microsequences and repetitive patterns) which anticipate a phase inversion [4]. Usually, microsequences and repetitive patterns are formed before the beginning/end of the energy release phase, since the distribution of shocks over time is not random but follows rules that allow us to understand the development level achieved by the sequence and to make more reliable predictions. The analysis of the seismic sequence structure is performed using: a) a spatial coverage of variable datasets ranging from regional scale to international scale (or global, which covers an area ranging from a few tens of km² to thousands of km²); b) a range of magnitude values of 2.0-10 M; c) a range of depths of 1-50 km. Microsequences TT-7S and DB-3SE We are all aware of the relative importance of small and large earthquakes in changing static stresses and triggering earthquakes. The small shocks observed before strong earthquakes suggest that they have specific properties that can be used as earthquake precursors. Even though big earthquakes are unquestionably more important than smaller ones in relation to the energy release, small shocks collectively have the same influence [5].
Microsequences TT-7S (Triple Maximum-Seven Shocks) and DB-3SE (Double Minimum-Three Small Shocks) are graphic patterns that develop inside the seismic sequence and allow us to locate the point of closure/beginning of an energy release phase. In microsequence TT-7S (Figure 1), we identify a peculiar trend, involving fluctuations of the magnitude values, that is completed during the energy release phase: it develops a first maximum (point 2) followed by a minimum value (point 3). Thereafter, the magnitude values begin to rise again (point 4), reaching higher values than the previous maximum, and a second minimum (point 5), which generally has a greater value than the previous one. The next step is characterized by a third peak (point 6) lower than the second maximum, followed by the last minimum (point 7), which is placed below the "transition line" [6] joining the minimum points 3 and 5, respectively. This completes the microsequence TT-7S [7]. Overall, the microsequence TT-7S appears as a strong fluctuation of the magnitude values that occurs and is completed during the energy release phase. In Figure 2 the green circles indicate the fourth shock in the microsequence, whose magnitude M(4) must comply with the following conditions: M(4) > M(2) and M(4) > M(6), while the red circles indicate points 2 and 6, respectively. The microsequence DB-3SE [4] shows more reliable features for imminent-term predictions. It is characterized by three small-shock fluctuations between two shocks of greater magnitude (Figure 3), and its completion establishes both the reversal of the energy accumulation phase and a critical condition in which the successive small shocks trigger either average-magnitude or catastrophic events. The microsequence's characteristic lies in the presence of a double minimum (points 1 and 3), which can be either symmetric, if the two minima have the same magnitude value (Figure 3(a)), or asymmetric, if the two minima have different values (Figures 3(b)-(e)). The microsequence DB-3SE is identifiable on charts of both the daily/monthly magnitude trends and the progressive number of shocks, and it is generally completed after a very short period. Conversely, the release phase remains active for one to eight days/months or eight events (in relation to the sequence's development time) after the trigger point, which is represented by the second minimum of the microsequence DB-3SE (red triangle). After the trigger point, it is possible that the magnitude values progressively increase up to the final target, thus generating a progressive-earthquake or flash-earthquake type energy release phase [4]. The procedure for identifying the microsequence DB-3SE is the following: 1) Identify the absolute and relative maximum values P(n), which represent the extreme points (Figure 4); 2) Identify the extreme points P1(n) (the second shock in the microsequence DB-3SE) whose magnitude M1(n) must be lower than both M1(n − 1) and M1(n + 1), where M1(n − 1) and M1(n + 1) are the magnitude values of the extreme points that precede and follow the point P1(n), respectively (Figure 5).
3) Draw the line joining the second shocks of the microsequences DB-3SE (blue line) (Figure 6); 4) draw the line joining the fourth shocks of the microsequences TT-7S (black line).

During the development of a seismic sequence, the beginning of an energy release phase (yellow triangles) can be identified by checking a threshold condition on M_E(n), the magnitude of the fourth shock P_E(n) in the microsequence TT-7S, which must also be confirmed by the condition AB < BC, where AB is the temporal distance, or number of shocks, between the point P_E(n) (yellow triangle) and the second shock in the microsequence DB-3SE (red circle), while the segment BC represents the temporal distance, or number of shocks, between the microsequence DB-3SE and the last event recorded during the current phase. The condition AB > BC indicates an ongoing energy accumulation phase, while AB = BC is regarded as the "watershed" between the energy accumulation and release phases.

In general, the magnitude of the second shock in the microsequence DB-3SE corresponds to the minimum expected magnitude (M_MI) during the energy release phase, while an indicative value of the maximum expected magnitude (M_MA) is obtained by adding the difference between the magnitude of the latter point and the minimum expected magnitude (M_MI) to the magnitude of the fourth shock in the microsequence TT-7S (point P_E(n)).

To obtain a more accurate method, however, in some cases it is necessary to use other types of signals to confirm the ongoing phase. For example, a clear signal of phase inversion is given by the fluctuations of the red line (Figure 7) that joins the extreme points of the seismic sequence to the second shock in the microsequence DB-3SE. In the lower part of the graph, the amplitudes of the imminent-to-short-term energy accumulation and release phases are indicated by red and green horizontal bars. From the graph, the following information can be inferred: 1) the microsequence DB-3SE indicates the end of the energy accumulation phase and the beginning of the energy release phase (point A); 2) point B represents an imminent-to-short-term foreshock and the first maximum in the microsequence TT-7S; 3) point C represents the mainshock, or a stronger shock, and the second maximum in the microsequence TT-7S; 4) point D represents the first shock of the energy accumulation phase (an aftershock) and also the third maximum in the microsequence TT-7S.

The seismic sequence develops according to the following pattern: after the formation of the microsequence DB-3SE (point A), either a single shock of greater magnitude than the second shock in the microsequence DB-3SE or several shocks of increasing magnitude (points B and C) may occur. At point C, the energy release phase closes if the third maximum in the microsequence TT-7S is formed (point D), which may coincide with a second mainshock or with an imminent-term aftershock (yellow circle), while the imminent-term energy accumulation phase ends with the formation of a new microsequence DB-3SE.

Warning Signs

A traditional use of the microsequence DB-3SE is to trigger a first alert when its third shock occurs. However, this approach does not reveal the extent of the energy release phase.
A more reliable method is to associate alerts of increasing order with the microsequence DB-3SE and with the shocks that follow it (Figure 8). For example, the third shock of a microsequence DB-3SE that precedes an extreme point P_E(n) is associated with a first-order alert (green triangle), while the extreme point P_E(n) itself is associated with a second-order alert (yellow). The imminent-term, higher-order signals (red triangles) are associated with microsequences DB-3SE that form after the second-order signal. The graph clearly shows that the warning signs occur in a repetitive 1-2-3 or 1-2-3-4 order, which makes it possible to follow the sequence development during the energy release phase and to establish danger levels that grow over time.

A different system can also be used to define the various warning-sign orders, based on the time (or number of shocks) between the extreme point P_E(n) and the second shock in the microsequence DB-3SE. The extreme point P_E(n) constitutes a first-order warning sign, the third shock in the microsequence DB-3SE is a second-order warning sign, while point C, placed at a distance from point B equal to AB, represents a third-order alert (Figure 9).

Repetitive Patterns

A strong earthquake is preceded by shocks of different magnitude organized in repetitive patterns that anticipate the energy release phase; their identification and understanding constitute a sound method for short- and medium-term predictions. The method for identifying the repetitive patterns consists in drawing the line joining the fourth shocks of the microsequences TT-7S (magenta upper line) and the line joining the low extreme points (magenta bottom line), selected according to prescribed magnitude conditions (Figure 10). The distinctive feature of the resulting band is that the spacing between the top and bottom lines varies as a function of the ongoing phase. Essentially, the fluctuation band provides information on the current phase of the seismic sequence: if the sequence is in an energy release phase, the band widens progressively, whereas in an accumulation phase it shrinks. In particular, during the activation step of the energy release phase the band shrinks further to contain the small shocks. A flattening of the fluctuation band usually means that a trigger point of the energy release phase is about to form.

If the magnitude values begin to exceed the interpolation line of the extreme-point values (solid blue line), the motion will usually continue upwards (with increasing magnitude values), while a downward crossing indicates an energy accumulation phase. By default, after a strong earthquake the magnitude values always tend to return below the interpolation line.

The movements of the band's upper line and of the interpolation line also provide information on the magnitudes of the strongest earthquakes. The method adopted involves drawing a line parallel to the interpolation line (green line) from the minimum point of the band's upper line (point A) and projecting onto it a second parallel line (red line), whose offset equals that between the interpolation line and the lower parallel line. The magnitude values supplied by this second parallel line over time may be taken as the expected target for the strongest earthquakes.
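A rough numerical analogue of the band-width behaviour described above is sketched below. It assumes the upper and lower band lines have already been sampled at a common set of shocks and simply compares recent and earlier average widths; how the lines themselves are built from the TT-7S fourth shocks and the low extreme points is left to the caller, and the window length is an arbitrary illustrative choice.

# Minimal sketch of the band-width diagnostic: a widening band is read as an
# energy release phase, a narrowing band as accumulation, and a flat band as a
# possible trigger point forming. Inputs are assumed to be the upper and lower
# band values sampled at the same shocks; the window length is arbitrary.
from typing import Sequence

def band_trend(upper: Sequence[float], lower: Sequence[float], window: int = 5) -> str:
    widths = [u - l for u, l in zip(upper, lower)]
    if len(widths) < 2 * window:
        return "insufficient data"
    recent = sum(widths[-window:]) / window
    earlier = sum(widths[-2 * window:-window]) / window
    if recent > earlier:
        return "widening (consistent with an energy release phase)"
    if recent < earlier:
        return "narrowing (consistent with an energy accumulation phase)"
    return "flat (possible trigger point forming)"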
Figure 11 shows the five repetitive patterns consisting of four shocks (two maxima and two minima) generated by the oscillation band of the extreme points. Models a-b-c form at the end of an energy accumulation cycle, while models d-e form during the energy release cycle (these are the most important, because shock S3 is a foreshock). In addition, the patterns are followed by warning signals generated by the microsequence DB-3SE that anticipate the strongest shocks.

Results

Figure 12 shows the results obtained by applying the methodology described above to the seismic sequence of the 1989 Loma Prieta earthquake. The mainshock, which occurred on 18 October 1989 (Mw 6.9), was preceded by a progressive earthquake-type energy release phase [6] characterized by a short-term foreshock of magnitude 5.4 ML recorded on 8 August 1989 (point F1), itself preceded by a fifth-order imminent-term warning sign generated by the microsequence DB-3SE. The subsequent evolution of the seismic recordings shows the development of an imminent-term accumulation phase consisting of shocks of magnitude less than 3.5 M, some of which are organized into microsequences DB-3SE that generated warning signs of various orders. In particular, we note the formation of a first-order signal (point 1), generated by the third shock of a microsequence DB-3SE that formed after the first foreshock (point F1), followed by the second-order signal generated by the extreme point (point 2) and by subsequent points (points 3 and 4). The latter, each generated by the third shock of a microsequence DB-3SE, preceded the second foreshock (point F2) and the Mw 6.9 mainshock, respectively. In addition, prior to the energy release phase the oscillation band shrinks, while after the first foreshock it expands.

In this case too, the oscillation band formed by the line of the extreme points (black line) and by the line of the second shocks in the microsequences DB-3SE (blue line) can be used to identify phase-inversion patterns (Figure 13). This oscillation band is an excellent oscillator for recognizing when the seismic sequence reaches a critical stage at which small shocks can trigger shocks of greater magnitude. Essentially, when the band narrows, the terminal step of the energy accumulation phase has been reached: this may suggest that, in the short term, the magnitude values could begin to grow. The band also provides significant warning signs (it should always be borne in mind that this is a predictive analysis, i.e., it tries to anticipate events that have not yet occurred), generated by the intersection between the extensions of the descending sections of the line joining the extreme points and the extensions of the ascending sections of the line joining the second shocks of the microsequences DB-3SE (red triangle). The minimum magnitude (M_MI) expected in the energy release phase and associated with the warning sign corresponds to the magnitude of the extreme point that precedes the signal (green circle).
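The last rule above, which assigns to each band-generated warning sign the magnitude of the extreme point that precedes it as its minimum expected magnitude M_MI, amounts to a simple lookup. The sketch below encodes that reading; the input formats and the function name are illustrative assumptions.

# Minimal sketch: each warning sign inherits, as its minimum expected magnitude
# M_MI, the magnitude of the last extreme point recorded before it. Warning
# times and extreme points are assumed to be given in chronological order.
from bisect import bisect_left
from typing import List, Tuple

def expected_minimum_magnitudes(
    warning_times: List[float],
    extremes: List[Tuple[float, float]],  # (time, magnitude) of extreme points
) -> List[Tuple[float, float]]:
    times = [t for t, _ in extremes]
    out = []
    for w in warning_times:
        i = bisect_left(times, w) - 1     # last extreme point before the warning
        if i >= 0:
            out.append((w, extremes[i][1]))
    return out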
To further reduce the interpretation errors that the procedure can inevitably generate, we recommend adopting one or more filters, such as the simplified Aroon oscillator [4], in order to identify the energy accumulation areas in relation to the data provided by the oscillation band.

Starting from the previous example, the graph in Figure 14 reports warning signs of various orders (green, yellow and red triangles), identified through the distance between the extreme point (black circle) and the second shock in the microsequence DB-3SE (red circle). The first-order warning sign coincides with the extreme point. The mainshock was triggered by an asymmetric microsequence DB-3SE (second-order signal), while the third-order signal, placed at a distance BC = AB (point C), formed a few seconds before the mainshock.

The graph in Figure 15 shows some details of the microsequences TT-7S that formed during the energy release phase and after the most hazardous seismic events. In particular, two imminent-term microsequences TT-7S can be observed in which the second maximum coincides with the most energetic shocks (foreshock and mainshock), while the third maximum, which closes the energy release phase, is represented by an aftershock. The transition from the energy release to the energy accumulation phase is also confirmed by the drop of the magnitude values below the "transition line" (dashed blue line) [6].

Conclusions

A seismic sequence occurring in a given area consists of microsequences and repetitive patterns formed by groupings of seismic events with particular features. Their analysis allows the short-term evolution of a seismic sequence to be studied by identifying the warning signals that precede a strong earthquake.

In particular, the microsequence TT-7S allows us to locate the closing point of an energy release phase and to identify a second-order warning sign. The microsequence DB-3SE identifies the beginning of the energy release phase and explains how moderate-to-high magnitude events can arise from low-energy earthquakes, whose occurrence time can be estimated sufficiently in advance. The oscillation band allows us to understand whether the seismic sequence lies within an energy release or accumulation phase, and therefore whether it has reached a critical stage where small shocks can trigger shocks of greater magnitude.

The method described above, based on the analysis of microsequences and repetitive patterns, can easily be applied to earthquake forecasting at both regional and global scale. It relies on a simple graphical procedure that can be used with data from different seismicity catalogues.

Figure 3. Schematic representation of microsequence DB-3SE. The red circles indicate the microsequence, while the red triangle indicates the trigger point of the energy release phase. The black circles represent the microsequence's starting and completion shocks.
Figure 4. Identification of absolute and relative maximum values (black circles).
Figure 5. Procedure used to identify the microsequences DB-3SE. In the red ellipse an example of microsequence DB-3SE is shown. The red circle indicates the second shock in the microsequence. The black circles indicate the extreme points (absolute and relative maximum values). Earthquake that occurred in China on 13 April 2010.
Figure 7. Alternating energy accumulation and release phases.
Figure 8. Warning signals of various orders. The red stars indicate the strongest earthquakes.
Figure 10. Procedure to identify the repetitive patterns. Some repetitive-pattern structures of the types shown in Figure 11 can be observed. The red triangles indicate the third-order warning signs generated by the microsequence DB-3SE.
Figure 13. Warning signs (red triangles) generated by the oscillation band of the extreme points and of the second shocks in the microsequences DB-3SE.
Figure 15. Microsequence TT-7S in the energy release phase. The red symbols indicate the three maximum values in the microsequence.
2018-02-17T18:11:37.660Z
2016-07-09T00:00:00.000
{ "year": 2016, "sha1": "a1586c21226902b1f79d853524d99c9aa6cf517c", "oa_license": "CCBY", "oa_url": "https://www.scirp.org/journal/PaperDownload.aspx?paperID=68107", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "a1586c21226902b1f79d853524d99c9aa6cf517c", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geography" ] }
1491858
pes2o/s2orc
v3-fos-license
Interaction of Stellate Cells with Pancreatic Carcinoma Cells

Pancreatic cancer is characterized by its late detection, aggressive growth, intense infiltration into adjacent tissue, early metastasis, resistance to chemo- and radiotherapy and a strong "desmoplastic reaction". The dense stroma surrounding carcinoma cells is composed of fibroblasts, activated stellate cells (myofibroblast-like cells), various inflammatory cells, proliferating vascular structures, collagens and fibronectin. In particular, the cellular components of the stroma produce the tumor microenvironment, which plays a critical role in tumor growth, invasion, spreading, metastasis, angiogenesis, inhibition of anoikis, and chemoresistance. Fibroblasts, myofibroblasts and activated stellate cells produce the extracellular matrix components and are thought to interact actively with tumor cells, thereby promoting cancer progression. In this review, we discuss our current understanding of the role of pancreatic stellate cells (PSC) in the desmoplastic response of pancreas cancer and the effects of PSC on tumor progression, metastasis and drug resistance. Finally, we present some novel ideas for tumor therapy by interfering with the cancer cell-host interaction.

Introduction

In the United States of America, Europe and Japan, the incidence of pancreatic cancer has risen slowly during the last few decades. Pancreatic ductal adenocarcinoma (PDA) is now the fourth leading cause of cancer-related death among both men and women in the U.S. [1]. Because this cancer shows no symptoms in its early stage and therefore has a low probability of early diagnosis, approximately 80-90% of patients present with local infiltration or metastatic disease at the time of initial diagnosis [2]. Therefore, only 15-20% of patients are candidates for surgical resection, which is the only chance for cure. The median survival time of patients with metastatic pancreas cancer is <6 months and the overall five-year survival rate is less than 5%. More than 90% of pancreas cancers are PDA, which is characterized by late detection, aggressive growth, local invasion into adjacent tissue, rapid progression, early systemic dissemination [3], late diagnosis and relative resistance to conventional chemo- and radiotherapy [4,5]. After surgical resection, local recurrence occurs in the majority of patients. In addition, a strong "desmoplastic reaction" is characteristic of PDA [6,7]. Fibroblasts, myofibroblasts and activated stellate cells produce the different connective tissue components such as collagens, fibronectin and proteoglycans [8]. As shown by Immamura et al. [8], in pancreas cancer and tumor-associated chronic pancreatitis the collagen content is 3-fold higher than in normal pancreas. In addition, the proportion of collagen types I, III and V is comparable among ethanol-induced chronic pancreatitis, tumor-associated chronic pancreatitis and pancreas cancer [8]. Whereas in pancreas cancer collagen synthesis is associated with spindle-shaped cells (fibroblasts and myofibroblasts), matrix metalloproteinases (MMPs) and tissue inhibitors of MMPs (TIMPs) are produced by both stromal and tumor cells [9]. In his "Frank Brooks Memorial State of the Art Lecture in Basic Sciences" at the 2001 Annual APA meeting, M.G. Bachem presented data for the first time indicating an interaction of PSC with tumor cells. One year later, Yen et al.
[10] described a pronounced increase in the number of α-smooth muscle actin positive cells in PDA and suggested that these cells might represent activated PSC producing the connective tissue surrounding carcinoma cells. Thereafter several research groups studied the role of PSC in pancreas cancer [11][12][13][14]. Pancreatic Stellate Cells In an earlier report, fibroblast-like cells were suggested to be responsible for the collagen synthesis resulting in pancreas fibrosis [15]. However, as shown later, the matrix producing cells in pancreas express α-smooth muscle actin (αSMA) and show similarities to activated hepatic stellate cells (HSC) or myofibroblast-like cells [16,17]; for a review see [18]. Normal fibroblasts do not express desmin or αSMA. In addition, by using microarray technology to analyze the gene expression profile of (i) normal fibroblasts; (ii) activated HSC and (iii) activated PSC, respectively, significant differences between stellate cells (HSC and PSC) and fibroblasts could be demonstrated [19]. Vitamin A storing cells have been found in different organs of vertebrates such as the liver, pancreas, kidney, lung, skin and others. HSC have long been known to play a major role in repair after injury and in liver fibrogenesis [20,21]. The cellular vitamin A content, primarily retinyl palmitate, is visible during excitation with UV-light as a rapidly fading fluorescence. Some cytoskeleton proteins might also be used to identify these cells (see Figure 1). PSC are of mesangial origin but, as we have learned recently in injury and cancer, part of these cells also originate from bone marrow (see below). Figure 1. Characteristics of quiescent (fat-storing phenotype) and activated PSC (myofibroblast-like phenotype). In acute and chronic pancreatitis, but also in pancreas carcinomas, PSC change their phenotype from a quiescent fat-storing phenotype to a highly active myofibroblast-like phenotype. Hereby the cells lose their Vitamin A containing fat droplets, express other cytofilaments, increase their proliferation rate and produce growth factors, cytokines, connective tissue, MMPs and TIMPs. In addition, as we have learned recently from animal models, part of the activated PSC originate from stem/progenitor cells of bone marrow. Vitamin A storing cells in the pancreas were first described in the year 1982 by Watari et al. using electron and fluorescence microscopy of mice pancreas tissue after vitamin A loading [22]. A few years later, vitamin A storing cells were found in normal human and rat pancreas and in fibrotic human pancreas [23]. In 1998 we, and the Apte-Wilson-Group in Sydney, isolated and characterized vitamin A storing stellate-shaped cells from rat and human pancreas [16,17]. Because of their similarity to HSC we named the cells pancreatic stellate cells [17]. In normal pancreas low numbers of quiescent fat-storing PSCs can be detected interlobular and in the periacinar space [16,17]. Comparable to the stellate cell-activating mechanisms in liver injury, also in acute and chronic pancreatitis and in pancreas cancer (but also in primary culture), the cells are activated and change their phenotype (Figure 1). The fat storing phenotype of PSC is quiescent (low mitotic index, low capacity to produce matrix and growth factos), has numerous perinuclear fat droplets containing retinyl-palmitate and expresses the cytofilaments vimentin, desmin, glial fibrillary acidic protein (GFAP), Nestin and synemin ( Figure 1). 
In pancreas injury (e.g., acute and chronic pancreatitis), but also in pancreas carcinoma [24], the quiescent fat-storing phenotype of PSC loses its retinoids, develops a prominent rough endoplasmic reticulum and transforms into a highly active matrix producing myofibroblast-like cell ( Figure 1). This cell type is primarily found in interlobular fibrotic areas or adjacent to carcinoma cells. The activated PSC (myofibroblast-like) are vimentin and αSMA positive, have a high mitotic index, express the receptors for PDGF and TGFß, express ICAM-I, are able to contract and move, produce the extracellular matrix components collagen I, III, XI, fibronectin and periostin, also synthesise MMPs and TIMPs and release the growth factors PDGF, FGF, TGFß1, CTGF, IL1ß, IL-6, IL-8, IL-15, Rantes, MCP1, ET1 and VEGF (see Figure 1). In addition, PSC which have been isolated from pancreas carcinomas also contain lipid droplets [14] and express vimentin and αSMA ( Figure 2). These tumor derived PSCs also produce collagen I and III, fibronectin, growth factors, and proteases in significant amounts [13,14,24]. Cell culture experiments have shown that TGFß, TNFα, IL-1, IL-6, IL-8, ethanol, acetaldehyde, and oxidative stress stimulate the transformation from the fat storing phenotype to the myofibroblast-like phenotype ( Figure 1). Activated PSC are stimulated by injured acinar cells [25], aggregating platelets, inflammarory cells, ethanol and acetaldehyde to proliferate, produce matrix, and MMPs [26], and synthesize growth factors and cytokines ( Figure 3) ( [27]). The most important paracrine factors stimulating fibrogenesis in activated PSC are TGFß1, FGF, PDGF, ET-1, and acetaldehyde. TNFα, IL-1, TGFß, and IL-6 are related to ECM degradation and remodeling ( Figure 3). Because activated PSC synthesize TGFß1, CTGF, PDGF, ET-1, IL1, IL6, IL8, activin-A, periostin, and COX-2, autocrine stimulatory loops might also play a role in chronic pancreatitis and PDA ( Figure 3). Additionally, activated PSC also produce IL15, which reduces lymphocyte apoptosis, again leading to further PSC activation [28]. Accumulating evidence suggests that part of the PSC in fibrotic pancreas are derived from bone marrow [29,30], express several stem/progenitor cell markers such as CD34, Nestin, p75NTR, GFAP, Bcrp1, Aldh, Notch (for review see [31]), and are able to differentiate into other pancreatic cell types [32]. Pan and coworkers have shown that bone marrow-derived progenitor cells contribute to the stellate cell and inflammatory cell population near metaplastic tubular complexes and carcinoma cells [33]. Based on their data, these authors suggest that bone marrow-derived progenitor cells could influence pancreatic cancer growth by modulating tumor microenvironment [33]. Cell-Cell Interactions between PSC and Carcinoma Cells Experimental and clinical data indicate that acute and chronic pancreatitis represent potent risk factors for PDA [34][35][36]. Tissue injury (e.g., acute pancreatitis) [37], oncogene activation like Hedgehog/Ras [38][39][40] or Notch [41,42] and TGFα activation [43] induce and accelerate acinar-ductal metaplasia and PanIN development. Notch activity is thought to play a major role in cancer development because TGFα induced acinar-to-duct conversion requires Notch activation [42], and Notch and Kras coactivation cause a rapid acinar-to-duct-like phenotype [41]. Chronic pancreatitis also accelerates Kras-driven PanIN and PDA development [34]. 
In addition, accumulating evidence indicates that in tissue injury and pancreatitis, activation of PSC is linked to tumor progression. Erkan and colleagues quantified the extent of activated stromal cells in situ by means of what they called the "activated stroma index" (ASI) [44]. They stained consecutive tissue sections of cancer patients with antibodies against αSMA or with aniline blue, which reveals collagen deposition. What they observed was that a high αSMA/collagen staining coefficient correlated with a poor prognosis and vice versa. This indicates that high PSC numbers and a high activation grade of PSC (strong αSMA staining), but not a strong desmoplastic reaction (collagen synthesis), are related to tumor progression. The cell type that gives rise to precursor lesions, termed pancreatic intraepithelial neoplasia (PanIN), is still under debate. Pancreatic duct epithelial cells [45] or centroacinar cells [46,47] have been suggested as cancer-initiating cells, but evidence accumulates that acinar cells represent the bad guys [41,[48][49][50]. There are also hints that PSC might participate in the initiation of PDA [51]. Besides the growth factors mentioned above, other factors might also be involved in CC-PSC interaction and contribute to cancer progression (see Figure 4). Firstly, PSC store retinaldehyde esters within lipid droplets, which may be oxidized by aldehyde dehydrogenases to retinoic acid (RA). Upon activation of PSC, the lipid droplets disappear, thereby releasing their contents. RA promotes cell differentiation, a physiological process needed to maintain tissue homeostasis, e.g., for restricting acinar cell proliferation. Following continuous activation of PSC though, RA stores may be depleted and proliferation of acinar cells may continue once initiated. Secondly, collagen I, which is produced by activated PSC, has been shown to directly weaken E-cadherin-mediated cell-cell interactions of tumor cells, and to stimulate cell proliferation by β-catenin-mediated up-regulation of c-myc and cyclin D1 expression [52]. Additionally, collagen I was shown to increase N-cadherin expression and the metastatic potential of CC in vivo [53]. In this context, one might speculate that, for example, phagocytosis of necrotic acinar cells [54] or bacteria [55] by PSC may lead to local activation of PSC and collagen production, finally stimulating acinar cell proliferation. In adult mice, chronic pancreatitis has been shown to be essential for the induction of ductal adenocarcinomas by K-Ras oncogenes [34], and from humans it is known that both idiopathic and alcoholic pancreatitis are associated with a 15-fold higher risk of developing pancreatic cancer [56]. In summary, the local activation of PSC, simply as a result of local tissue injury, may in turn 'accidentally' initiate or promote cell proliferation of acinar cells. Figure 4. Interaction of PSC with pancreas carcinoma cells. Pancreas carcinoma cells (CC) accelerate transformation of quiescent PSC to the myofibroblast-like phenotype. This cell type is attracted by CC and stimulated to proliferate, produce ECM and growth factors. PSC stimulate angiogenesis, CC proliferation, chemoresistance, invasion, and motility. In addition, PSC reduce anoikis/apoptosis of CC. AM, adrenomedullin; SDF-1, stromal cell-derived factor-1. To study PSC-CC interactions, we performed co-culture experiments of CC and PSC (see Figure 5) or stimulated cultured PSC with CC-supernatants and vice versa.
The results and the data of others regarding the interaction of PSC with carcinoma cells are summarized in Figure 4. By producing TGFß1 and other fibrogenic mediators, pancreas CC stimulate the transformation of the quiescent fatstoring phenotype of PSC to the highly active myofibroblast-like phenotype. In addition, CC attract PSC (Figure 5c,d) and stimulate motility, proliferation and matrix synthesis of PSC [14,27]. The result of this stimulation is a strong desmoplastic reaction surrounding carcinoma tissue [13,14]. The activated PSC proliferate strongly in response to PDGF, IGF1, and ET-1. Migration of PSC is stimulated by PDGF, matrix synthesis is stimulated primarily by TGFβ1, FGF-2, and sonic hedgehog. In addition, CC induce MMP synthesis via the release of IL1, TGFß1, TNFα, and EMMPRIN. In particular MMP-2, MMP-9, and plasminogen-activator (uPA) are involved in tumor dissemination. As shown by Gress et al., MT1-MMP, MT2-MMP, MMP-2, MMP-9 are strongly expressed in pancreas cancer [9]. Recent data from our group shows that PSC significantly contribute to MMP-2 secretion in the desmoplasia of pancreatic cancer in vivo and in vitro [63]. MMP-2 staining was found primarily in PSC adjacent to cancer cells [14] and secretion of MMP-2 by PSC by far exceeds that of cancer cells [63], although pancreas carcinoma cells express some MMP-2. Furthermore, there is evidence that cancer cells induce uPA expression in stromal cells, which then bind to the urokinase receptor (uPAR) expressed on cancer cells (for review see [64]). After binding, uPA converts plasminogen to plasmin, which then degrades fibrin, collagen IV, fibronectin and laminin. Beside the action of MMPs, this also enables tumor cells to migrate through tumor surrounding ECM. In particular, MMPs are suggested to play an important role in cancer progression that is in early metastasis, angiogenesis, and release of growth factors from ECM [65,66]. Early trials with MMP inhibitors though did not result in significant benefits for cancer patients, probably due to their application in late stage cancer, their usage as monotherapy, and missing information about good and bad responders [66]. However, new target-specific inhibitors are being developed [67,68]. Data from our group [14,63] and from the lab of H. Fries [69] have shown that CC express EMMPRIN (Extracellular Matrix Metalloproteinase Inducer) and thereby stimulate the synthesis of MMPs in PSC. EMMPRIN, also named Basigin or CD147, which is a type I transmembrane glycoprotein, has been extensively studied because it is involved in tumor cell migration and invasion [70][71][72][73], apoptosis [74,75], angiogenesis [76][77][78], and chemoresistance [79,80] in a variety of cancers (reviewed by [81]). In pancreatic cancer, serum levels of EMMPRIN are elevated compared to healthy volunteers, though serum levels do not correlate with TNM status [69]. Experiments in nude mice showed that EMMPRIN promotes tumor growth of CC in vivo [82]. Most effects of EMMPRIN have been reported to be mediated by the induction of MMPs. For example, positive correlations between the expression of MMPs and EMMPRIN in various cancers can be found in numerous reports [83,84]. Furthermore, co-culture of cancer cells with stromal fibroblasts induces MMPs, which can be blocked by EMMPRIN antibodies [72,85,86]. In addition, there is a positive feedback regulation of EMMPRIN and MMP-dependent generation of soluble EMMPRIN [87]. 
Finally, the up-regulation of MMP expression in PSC by CC-derived EMMPRIN accelerates tumor growth in vitro and in vivo [63]. Because of the central role of EMMPRIN in tumor progression, antibody-or siRNA-based therapies have already been tested in mouse models of various cancers with some success [82,88,89]. However, some evidence exists that EMMPRIN does not induce MMP-synthesis in certain cell systems, tumor types, and animal models. In murine melanoma cells, for example, knockdown of EMMPRIN did not reduce the tumor cell-mediated induction of MMP-2, -9, and -14 both in vitro and in vivo [90], but impaired angiogenesis and metastasis formation [77]. EMMPRIN exists as high and low glykosylated isoform. Their proportion is partly regulated by the interaction of EMMPRIN with caveolin-1 in the Golgi complex [91] and varies between different pancreas CC lines [63,69]. In order to mediate intercellular signals, the transmembrane protein EMMPRIN can be solubilized from the cell membrane by microvesicle shedding [92,93] or by proteolysis of the extracellular part of EMMPRIN by MMPs [87,94]. PSC also express varying amounts of EMMPRIN [63,69]. This is of interest, because so far there is no convincing data identifying the receptor for EMMPRIN. The most favored mechanism of action of EMMPRIN is the homophilic interaction of soluble EMMPRIN and membrane-bound EMMPRIN [86,95]. This interaction and the consecutive induction of MMPs can be blocked by unglycosylated recombinant EMMPRIN-the ineffective form of the protein-or by antibodies against EMMPRIN [86]. The main obstacle of this proposed mechanism is the way by which the signal is transduced into the cell. Obviously, EMMPRIN itself has no kinase or phosphatase activity and is not coupled to ion channels or any other known signal transduction molecules. However, following addition of soluble EMMPRIN to cultured cells activation of downstream mediators such as p38-MAPK [96], SAPK/JNK [73], or phospholipase A2 and 5-lipoxygenase [97] have been described. Another possibility of EMMPRIN signaling might be endocytosis of dimeric or multimeric EMMPRIN (following binding of soluble EMMPRIN) with other components, for example cyclophilin B, which has been shown for the entry of measles virus into epithelial cells [30]. Recently, in vitro and in vivo experiments have revealed the interaction of EMMPRIN with monocarboxylate transporters, proteins, which confer lactate export from cells in tissues with reduced oxygen supply like solid tumors [82], providing an additional explanation for the tumor promoting effect of EMMPRIN in conjunction with the Warburg hypothesis [98]. Another way EMMPRIN can exert its effects is by the direct binding of MMPs to the tumor cell surface and subsequent local organization of the enzymes in the cell membrane (e.g., a directed localization towards invadopodia), which has been shown for lung carcinoma cell-derived MMP-1 [99]. This strongly accumulated MMP could again locally activate further MMPs, for example PSC-derived MMP-2 and MMP-9. Mutual activation and inactivation of MMPs plays an important role in regulation of MMP activity (for review see [100]). Additionally to EMMPRIN (and other factors), these proteases are also tightly regulated by tissue inhibitors of MMPs (TIMPs). In PDA, an imbalance of MMP and TIMP expression compared to healthy controls has been observed [101]. 
As tumor cells proliferate, the total amount of MMPs (inactive pro-MMPs as well as active MMPs) would concomitantly increase and further activate PSC-derived MMPs. Hereby, a positive feedforward loop on ECM remodeling is generated. By producing multiple cytokines and growth factors including PDGF, FGF, TGFβ1, CTGF, IL-1β, IL-6, IL-8, SDF-1, Rantes, TNFα, MCP-1, and ET-1 activated PSC also influence proliferation, motility, invasion, and chemoresistance of CC (see Figure 4). In addition, via the production of VEGF, PDGF, FGF1, FGF2, IL-8, coll-I, periostin, adrenomedullin, prokineticin-1, MMPs, and uPA, activated PSC also promote angiogenesis (see Figure 4). CC proliferation is stimulated via the growth factors TGFß, FGF2, and PDGF [13,14], invasion via the production of MMPs, and motility via PDGF and EGF. These data have been confirmed by C. Logsdon's group which has shown that PSC supernatants increased tumor cell proliferation, migration, invasion, and colony formation [102]. Furthermore, gemcitabine and radiation therapy were less effective in tumor cells treated with PSC supernatant. Although all the responsible mediators have not been identified as yet, activation of MAPK and Akt pathways have been observed after addition of PSC supernatant to cultured tumor cells [102]. Very recently X. Wang's group identified another soluble factor in PSC supernatant (stromal cell-derived factor-1 = SDF-1) promoting proliferation, migration and invasion of cancer cells [103]. In addition, very interestingly the presence of PSC increased the incidence of tumor formation when limiting numbers of CC were orthotopically injected into nude mice [102]. These data were confirmed by M. Apte's group using orthotopically transplanted MiaPaCa-2 cells in combination with PSC from different human donors [104]. They identified PDGF as the primarily responsible factor for the tumor promoting effects of PSC (in vitro). Interestingly, the group also reported the existence of SMA-positive human cells in liver nodules together with a higher incidence of distant metastases on transplantation of CC with PSC than without (50% vs. 10%), indicating that some PSC might have comigrated with the CC. Our in vivo data, showing the induction of subcutaneous tumors in nude mice, also indicate that, in the presence of PSC, tumor progression is accelerated [13,63]. Others have shown that direct contact of tumor cells with PSC activated the Notch signaling pathway and resulted in even stronger stimulation of tumor cell proliferation compared to stimulation by supernatant [105]. In summary, all these observations support the hypothesis that PSC provide a microenvironment which is advantageous for tumor cell growth and survival. Interestingly, several growth factors, such as TGFß, FGFs, PDGF-BB and insulin-like growth factor-I (IGF-1) are sequestered within the ECM, which thus acts as a sponge for these factors [105]. Proteases from CC or MMPs from PSC (which are induced through EMMPRIN from CC) might degrade the ECM and release these bound growth factors. As shown in Figure 6, we designed a set of experiments to demonstrate the release of growth factors by degradation of ECM. Activated PSC were cultured in 6-well plates until confluency. Then, the medium was changed and in the absence of fetal calf serum new medium was conditioned for three days. This PSC supernatant was added to cultured pancreas CC (Panc1 and SW850) and proliferation of the CC was quantified by BrdU-incorporation (see Figure 6a and 6b). 
Cultured PSC were washed and then lysed using distilled water. After a further three washing steps, the remaining ECM was degraded at 37 °C by addition of 2 mL CC supernatant (containing proteases and MMPs). The degraded matrix was then added to cultured CC and proliferation was again measured by BrdU incorporation (Figure 6a and c). In addition, PSC supernatant and degraded PSC-matrix were preincubated for 1 h with neutralizing antibodies against TGFß1, bFGF, and PDGF-AB. As shown in Figure 6a and 6b, PSC-conditioned medium (PSC-SN) and degraded PSC matrix both stimulate the proliferation of Panc1 cells by almost 40% compared to control. Preincubation with neutralizing antibodies against TGFß1 and bFGF identified both factors as mitogens for CC. Similar results were obtained using SW850 cells instead of Panc1. In summary, these experiments demonstrate that (i) TGFß1 and bFGF are produced by cultured PSC; (ii) both factors are sequestered in the ECM and might be released by matrix degradation; (iii) both factors stimulate proliferation of pancreas carcinoma cell lines. Role of PSC in Chemoresistance As mentioned above, surgical resection of PDA in combination with chemo-or radiotherapy is the only chance for cure. However, only 15-20% of the patients are candidates for surgical intervention. Another problem is the development of chemo-and radioresistance in PDA. The synthetic nucleoside analogue gemcitabine, which is the most commonly used drug for chemotherapy of PDA, is a prodrug, which needs to be transported into the tumor cells followed by activation through various enzymes. Recent data show that, among these proteins, a high expression of equilibrative nucleoside transporter 1, which enables the entry of the compound into the cells, correlates with higher survival in gemcitabinetreated patients [107]. Within cells, gemcitabine is phosphorylated by various enzymes and inhibits ribonucleotide reductase and DNA synthesis [108], but phosphatases, like 5'-nucleotidase, or cytidine deaminase, which irreversibly deactivate the drug, again may limit this process. Additionally, as one might expect, there is a positive correlation between the expression of anti-apoptotic genes of the Bcl-2 family, like Bcl-XL, and resistance of CC against gemcitabine or 5-fluorouracil [109]. Up-regulation of these anti-apoptotic genes is mainly conferred by mutations of nuclear factor kB (NF-κB), variations in signaling pathways upstream of NF-κB, or continuous autoand paracrine stimulation of NF-κB activity by various growth factors and cytokines. Inhibition of NF-κB increased apoptosis and thereby reduced chemoresistance [110]. Participation of PSC in the above mentioned mechanisms has not been investigated so far, but there are several observations indicating that PSC might play a significant role in both the manifestation and progression of chemoresistance. PSC-conditioned medium, for example, directly protects CC against the cytotoxicity of gemcitabine: Only 9% of BxPC3 cells in the presence, compared to 34% of cells in the absence, of PSC-conditioned medium were TUNEL-positive after 48 hours treatment with 100 µM gemcitabine [102]. Among others, CC express IL-1, which confers constitutive NF-κB activity and chemoresistance via autocrine stimulation [111]. At this point, PSC come into play: IL-1 leads to the expression of inducible nitric oxide synthase (iNOS) in PSC [112]. 
PSC do not express IL-1, but, once expressed, the ‗constitutively' active iNOS-being independent on enzyme regulators like calcium/calmodulin-releases nitric oxide (NO • ). The freely diffusible molecule NO • increases, as a paracrine mediator, the expression of IL-1 in CC, resulting in a positive feedback loop. Co-culture experiments using either PSC or PSC-conditioned medium revealed significantly reduced rates of etoposide-induced apoptosis (>50% reduction) in CC compared to CC mono-cultures. This effect was blocked by either an IL-1 receptor antagonist or the iNOS inhibitor aminoguanidine, but enhanced by the NO • donor S-nitroso-N-acetyl-D,L-penicillamine (SNAP). Additionally, histology revealed both the expression of IL-1 and iNOS in human pancreatic adenocarcinoma samples [112]. In this context, hypermethylation of caspase-3, -7, -8, and -9 genes, resulting in reduced expression of these effector enzymes of apoptotic signaling, seems to be responsible for the increased resistance to apoptosis [113]. Another research group has shown that culture of CC on ECM protein-coated dishes (including fibronectin, laminin, collagen type I and IV) directly influences CC proliferation and increases the resistance of CC against the cytotoxicity of 5-fluorouracil, cisplatin, and doxorubicin [114]. This suggests that ECM proteins, abundantly expressed by PSC, might directly promote resistance against anticancer drugs. Interestingly though, none of these ECM proteins changed gemcitabine-mediated cytotoxicity (75-400 nM; 72 hours) in any CC line investigated (Capan-1, Panc-1, MiaPaCa-2) [114], indicating that the increased resistance against anticancer drugs mediated by ECM proteins alone does not adequately reflect the situation in vivo. Finally, multicellular layer experiments, used as in vitro models for solid tumors, suggest a limited penetration of anticancer drugs through tumor tissues [115]. In case of PDA, a possible candidate gene directly involved in such an increased chemoresistance is the glycoprotein decorin, which is highly expressed on both the mRNA and protein level in activated PSC. It leads to decreased PCC proliferation, but increased gemcitabine resistance, probably due to direct binding of growth factors (affecting proliferation) [116] and small molecules like gemcitabine via it's leucine-rich domains. The net effect of decorin on proliferation (inhibition) and chemoresistance (increase) in vitro is to slow down CC growth [117]. A recent study reported major progress with respect to reducing chemotherapy resistance in PDA. Inhibition of hedgehog signaling (involved in tumor-stroma interaction as mentioned above) in a murine model of PDA, reduced stromal expansion, thereby increasing intratumoral vascular density and intratumoral concentration of gemcitabine, leading to transient stabilization of disease [118]. In summary, accumulating data indicate that PSC participate in the development of chemoresistance in PDA. Accordingly, research focusing on the improvement of anticancer drug efficiency should not exclusively study CC, but always consider the influence of stromal cells. Conclusions In recent years, it has been established that cancer growth and spread are strongly influenced by the microenvironment. However, presently the molecular signals involved in the tumor-host cross-talk have only partly been identified. Hopefully in the near future, new therapeutic options might be developed, which directly interfere with the tumor-host cross-talk. 
The most promising cellular targets for anti-stromal treatment could be the matrix- and growth factor-producing PSCs and the endothelial cells that play a central role in angiogenesis. In chronic pancreatitis, cancer initiation and progression might be inhibited, also through inactivation of PSC, causing a reduction of matrix remodeling. Recently, it has been shown that forced expression of peroxisome proliferator-activated receptor γ (PPARγ), or CCAAT/enhancer binding protein α (C/EBPα), induced a phenotypic switch from activated to quiescent PSCs in vitro, which was dependent on the expression of albumin [119]. Effects on MMP expression and net effects on tumor progression in vivo, however, await further experiments.
2014-10-01T00:00:00.000Z
2010-09-01T00:00:00.000
{ "year": 2010, "sha1": "73eaaba9a1f8601ffcd28510b0077c42cb4453d4", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6694/2/3/1661/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "73eaaba9a1f8601ffcd28510b0077c42cb4453d4", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
265290403
pes2o/s2orc
v3-fos-license
Congenital infantile hypertrophic pyloric stenosis in preterm dizygotic twin infants diagnosed early: A case report

Introduction and importance
The concordance of hypertrophic pyloric stenosis (HPS) is 0.25 % to 0.44 % between monozygotic twins and 0.05 % to 0.10 % between dizygotic twins. A combination of genetic and environmental factors may have contributed to the occurrence of HPS. In view of the few related cases reported recently, we present two dizygotic twins who were diagnosed with HPS.

Case presentation
This report describes a rare case of congenital infantile hypertrophic pyloric stenosis in preterm dizygotic twins diagnosed early, in which the first case presented with severe clinical features and was managed surgically, while the second presented with moderate features and was hence managed non-operatively with atropine for 14 days. At 6 months of age, both twins continued to tolerate feeds, demonstrated satisfactory weight gain and had achieved appropriate developmental milestones. The postoperative course was uneventful in twin A.

Clinical discussion
Congenital HPS in premature twins remains an underdiagnosed pathology because its clinical picture mimics digestive intolerance to feeds. The mean age at diagnosis is about 38 days, and only 0.4 % of all children suffering from HPS show symptoms in the first 3 days of life. Symptom relief is achieved after a classic pyloromyotomy, performed either by the preferable laparoscopic technique or by the open surgical technique.

Conclusion
If one of the dizygotic twins has HPS, the other baby should be evaluated for the same diagnosis as early as possible, to ensure timely management. HPS with moderate clinical features can be treated with atropine for 14 days, while severe HPS should be treated by pyloromyotomy.

Introduction
Infantile hypertrophic pyloric stenosis (IHPS) is a common cause of gastrointestinal obstruction in newborns, but its exact etiology remains unclear [1]. Prior to recognition of this disease as an entity by Hirschsprung in 1888 [2] and the description of pyloromyotomy by Ramstedt in 1911 [3], mortality rates exceeded 50 % [4]. The concordance of HPS is 0.25 % to 0.44 % between monozygotic twins and 0.05 % to 0.10 % between dizygotic twins, making it uncommon in the population [5]. IHPS has an incidence of 20.09 per 10,000 live births [6]. The incidence of HPS in the Caucasian population is 5/1000 newborns, whereas it is rarer in the African population. In the United States of America, the frequency in white children is 0.13 % of total live births [5]. The onset of symptoms is usually abrupt and dramatic, presenting with non-bilious emesis resulting from hypertrophy and hyperplasia of the pylorus, usually between the second and eighth week of life [7]. A great amount of research has been conducted on this disease, but the exact etiology remains unknown. In 1961, the hypothesis of a multifactorial threshold model of inheritance was suggested [8]. In recent years, environmental factors have been associated with IHPS. Children of smoking mothers have a higher risk of IHPS [6]. Pesticides have also been reported as a potential cause of IHPS [9]. A combination of genetic and environmental factors may contribute to the occurrence of HPS. In view of the few similar cases reported recently, we present two dizygotic twins who were diagnosed with HPS [5,6].
The aim of this report was to describe twins with HPS, in which the first case presented with severe clinical features and was managed surgically, while the second presented with moderate clinical features and was hence managed non-operatively with atropine. The work has been reported in line with the SCARE criteria [10].

Case presentation
These were dizygotic premature twins born at 33 weeks of amenorrhea, hospitalized in the neonatology unit for 14 days before being transferred to the pediatric surgery department. They had been admitted to the neonatology unit at the 15th minute of life because of respiratory distress and low birth weight. They were born to a 26-year-old primiparous mother carrying a monochorionic monoamniotic dizygotic twin pregnancy, with no family history of consanguinity or HPS, and to a father aged 29 years. There was no history of fertility drug or contraceptive use. There was no history of smoking or comorbidity. At 31 weeks, preterm rupture of membranes resulted in oligohydramnios, confirmed by sonography, and preterm labor. Except for the dexamethasone and magnesium that the mother received following the diagnosis of preterm labor and oligohydramnios, the pregnancy was unremarkable. Both twins were born by caesarean section, with the male twin A weighing 1700 g and measuring 41 cm in length, while the female twin B weighed 1600 g and measured 40 cm in length. The Apgar score was 7/8/9 for twin A and 5/8/10 for twin B at one, five and ten minutes, respectively. No obvious malformations were identified in the twins. The twins received immediate essential newborn care in accordance with the World Health Organization (WHO) guidelines. Respiratory distress syndrome with a Silverman score of 4/10 in twin A and 5/10 in twin B was noted, with signs of prematurity (Jean-Ballard score of 20). A diagnosis of hyaline membrane disease and neonatal infection in prematurity was initially made in the twins. Twins A and B were on antibiotics (intravenous cefotaxime) before the diagnosis of HPS and were fed breast milk alternating with infant formula milk using an orogastric tube. Feeding started on the second day for twin A and the third day for twin B. At 9 days of age, twin A developed projectile postprandial non-bilious vomiting. On physical examination, the weight was 1550 g and a pyloric mass was palpable. An abdominal ultrasound done on the 10th day of life revealed stenosis measuring 5 mm in thickness and 2.1 cm in length, with the antral nipple sign representing protruding pyloric mucosa (Fig. 1). Both twins were transferred to the pediatric surgery department on the 13th day of life for surgical management. Twin A underwent an open extramucosal pyloromyotomy according to the Fredet-Ramstedt technique (Fig. 2). At 12 h postoperatively, a gastrografin meal (water-soluble nonionic gastrointestinal radiopaque agent) was done. Passage of the contrast agent was seen up to the duodenum, confirming patency. Feeds were initiated at 24 h post-surgery and were well tolerated. The infant was discharged on the 3rd postoperative day.
Twin B presented with moderate postprandial vomiting at 12 days of life. There was a small palpable pyloric mass on physical examination, and stenosis was seen on sonography. The arterial blood gas findings for twin B were: pH = 7.38, PaO2 = 90 mmHg, PaCO2 = 35 mmHg. Twin B was initially placed under observation of clinical features and treated with atropine before each meal, with a plan for surgery if symptoms persisted or became worse. In the end, the evolution was good and she did not undergo surgery, since the hypertrophy of her pylorus was small. Atropine was given for 14 days under observation, and the postprandial vomiting decreased from 7 times to 1 time a day over the 2 weeks.

Follow up
At 3 months of life, twin A weighed 4800 g while twin B weighed 3210 g. No complaints were reported in either twin. At 6 months, both twins continued to tolerate feeds, demonstrated satisfactory weight gain and achieved developmental milestones. No complications were observed in either baby.

Discussion
The incidence of HPS is higher in non-Hispanic white males, firstborns, preterm births (<37 weeks) and infants from multiple gestations [5,6]. There is, however, variation in the incidence of HPS among Africans, ranging from 1/5500 to 12.9/10,000 children. The reported incidence of 1/5500 live births in Tanzania was considerably lower than trends seen in other parts of the world [1]. An early onset of HPS presenting soon after birth, as seen in these twins, has only rarely been reported. In a large case-control study, Demian et al. revealed that only 6 % of infants diagnosed with HPS were <14 days old [11]. These infants had a significantly higher positive family history of HPS compared to infants who presented with HPS after 14 days of life. Besides early presentation, late presentation of HPS has also been reported in the literature as a rare event [12]. In our report, two dizygotic twins who developed HPS are described, which is interesting because Icnoti et al. (2022) in Mexico noted similar cases [5]. HPS has also been reported in dizygotic twins simultaneously [13]. The mean age at diagnosis is about 38 days, and only 0.4 % of all children suffering from HPS show symptoms in the first 3 days of life [14]. Furthermore, a decreased risk of developing HPS has been shown with increased maternal age and number of pregnancies [4], and it is very uncommon for preterm infants to develop signs of HPS in the first week of life. Though the twins in this report were born preterm, the mother was only 26 years old and this was her first pregnancy. Even if preterm infants may not show the typical symptoms of HPS, such as projectile vomiting and metabolic alkalosis, mild symptoms such as regurgitation are reported to occur in the first days of life in 2/3 of all cases [13]. HPS has also been reported to occur in triplets and dizygotic twins simultaneously [13,15], which was the case in the present report. Darlene in 2018 stated that when one twin has HPS, about 80-90 % of the time the other twin also "presents with it" [16].
It is very rare for premature infants to develop signs of HPS during the first week of life [17]. In the present study, the age of onset of signs was 10 days of life. The usual clinical picture of HPS is projectile vomiting, and in expert hands the enlarged pyloric muscle can be palpated as an olive in the abdomen [5], which was the case in our observation. Standard ultrasound criteria for measurement of pyloric muscle size in children with HPS may be valid for confirmation of the diagnosis of congenital HPS [17]. Symptom relief is achieved after a classic pyloromyotomy, performed either by the preferable laparoscopic technique or by the open surgical technique [13]. Medical treatment based on atropine has been tested by other teams, given intravenously until the vomiting stops and then orally before each feed, as was the case with twin B in our report. This treatment improves digestive tolerance during the natural evolution of the pathology [18]. Though twin B was treated with atropine, the resolution of symptoms was slower, which in turn was associated with a prolonged hospital stay. In settings where surgeons are much fewer than recommended and anesthesia specialists for neonates are not available, management with atropine may be a viable option. We treated twin A by a supra-umbilical midline laparotomy of about 3 cm, visualized a hypertrophic stenosis of the pylorus, and performed an extramucosal pyloromyotomy in an avascular zone according to the Fredet-Ramstedt technique; the outcome was uneventful. A gastrografin meal confirmed digestive outlet continuity on the 1st postoperative day.

Conclusion
Congenital HPS in premature twins remains an underdiagnosed pathology because its clinical picture mimics gastroesophageal reflux. If one of the dizygotic twins has HPS, the other baby should also be evaluated for it, as early as possible, to ensure timely management. HPS with moderate clinical features can be treated with atropine, while severe symptoms require pyloromyotomy.

Fig. 1. Abdominal ultrasound image of twin A at 10 days of life: stenosis measuring 5 mm in thickness and 2.1 cm in length (green arrow), with the antral nipple sign (blue arrow) representing protruding pyloric mucosa; the yellow arrow indicates the stomach. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
2023-11-20T16:03:28.864Z
2023-11-17T00:00:00.000
{ "year": 2023, "sha1": "1f5c5ed247d07ef422e1857b95f45b49774e6db3", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.ijscr.2023.109069", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4347d2c98ab6c0375eb8e0ccc80657b6dccf3e28", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
28962921
pes2o/s2orc
v3-fos-license
Development of autoimmune hepatitis type 1 after pulsed methylprednisolone therapy for multiple sclerosis: A case report

A 43-year-old woman with multiple sclerosis (MS) was treated with pulsed methylprednisolone and interferon β at a hospital. Four weeks after initiating treatment, liver dysfunction occurred and she was referred and admitted to our hospital. Clinical and laboratory findings were consistent with and fulfilled the criteria for drug-induced hepatitis, but not for autoimmune hepatitis (AIH). She was successfully treated with corticosteroids. As ataxia developed after 1 year, she was treated with pulsed methylprednisolone for 3 d, then readmitted to our hospital when liver dysfunction occurred. Clinical and laboratory findings led to the diagnosis of AIH. To the best of our knowledge, this is the second case of AIH developing after pulsed methylprednisolone for MS.

INTRODUCTION
Intravenous methylprednisolone pulse therapy is the standard treatment for relapsing multiple sclerosis (MS). Interferon (IFN)-β is the most commonly used drug in the treatment of MS, and has been proven to reduce the disease activity, progression and relapse rate [1,2]. IFN-β is associated with hepatotoxicity, although it rarely induces severe liver injury. It has been reported that autoimmune hepatitis (AIH) occurs during IFN-β therapy for MS [3,4], but only one report has described the occurrence of AIH after intravenous methylprednisolone pulse therapy for MS [5]. We describe herein a case of an MS patient who developed AIH after treatment with IFN-β and pulsed methylprednisolone.

CASE REPORT
A 43-year-old woman with abdominal discomfort and nausea was referred to our hospital on August 7, 2006. She had been diagnosed with MS on the basis of clinical and laboratory findings 7 years earlier. Three years earlier, she had been treated with pulsed methylprednisolone (1000 mg/day for 3 d) followed by 50 mg/day of oral prednisolone because of ataxia. Although oral prednisolone was tapered and stopped for 1 month, she remained healthy until June 2006, when ataxia developed again. On June 28, 2006, she was treated with pulsed methylprednisolone (1000 mg/day for 3 d) followed by 50 mg/day of oral prednisolone. Despite pulsed methylprednisolone therapy, symptoms did not improve. She was therefore retreated with pulsed methylprednisolone (1000 mg/day) for 3 d from July 5, 2006. She then became nauseous and vomited, and these symptoms did not improve. On August 7, 2006, she was referred to our hospital and admitted after blood testing revealed severe liver dysfunction. Three years earlier, she had developed acute hepatitis due to Epstein-Barr (EB) virus after treatment with pulsed methylprednisolone. Since then, she had been free of liver dysfunction. On admission, her blood pressure was 156/89 mmHg, heart rate was 102 beats/min, body temperature was 37.3°C, and welts were present on the skin at the sites of IFN-β injection. Her conjunctivae were not jaundiced, and heart and respiratory sounds were normal. No abnormalities were noted in the chest or abdomen. The liver and spleen were not palpable. Neurological examination showed no abnormalities suggestive of MS.
Laboratory findings were as follows: 1102 IU/L aspartate aminotransferase (AST) (normal, 10-35 IU/L), 1067 IU/L alanine aminotransferase (ALT) (normal, 12-33 IU/L), 377 IU/L alkaline phosphatase (ALP) (normal, 300-500 IU/L), 3.4 mg/dL total bilirubin (TB) (normal, <1.1 mg/dL), 2.2 mg/dL direct bilirubin (DB) (normal, 0.2-0.4 mg/dL), 26 IU/L γ-glutamyl transpeptidase (γGTP) (normal, 10-47 IU/L), 6.4 g/dL total protein (TP) (normal, 6.0-8.5 g/dL), 3.7 g/dL albumin (normal, 4.0-5.3 g/dL), 1370 mg/dL serum immunoglobulin (Ig)G, 147 mg/dL IgA, 272 mg/dL IgM, and 71.4% prothrombin time (PT). Anti-nuclear antibody (ANA), anti-smooth muscle antibody and anti-LKM-1 antibody were all negative. HBs antigens, IgM-HA and HCV antibodies were negative. Other viral infections including EB virus and cytomegalovirus infection were excluded by serological testing. Abdominal computed tomography showed no abnormalities. Biopsy specimen of the liver showed bridging perivenular necrosis with infiltration of inflammatory cells including eosinophils (Figure 1). A lymphocyte-stimulation test for IFN-β yielded negative results, but the patient displayed a score of 9 according to the criteria for drug-induced liver injuries [5], indicating a high probability of drug-induced liver injury. All these findings led to the diagnosis of drug-induced liver injury caused by IFN-β. Despite intravenous administration of Stronger Neo-Minophagen C (60 mL/day) and prostaglandin, jaundice developed with a serum TB level of 19.1 mg/dL. Methylprednisolone (125 mg/day for 3 d) and ursodeoxycholic acid (UDCA, 600 mg/day) were therefore administered. Symptoms subsequently improved and the serum TB level normalized. Prednisolone was decreased gradually and stopped on April 10, 2007. UDCA was stopped on May 10, 2007. Liver function remained normal even after withdrawal of prednisolone and UDCA.

However, ataxia developed and the patient was again treated with pulsed methylprednisolone (1000 mg/day) for 3 d from October 1, 2007. After pulsed methylprednisolone, oral prednisolone was not administered. Two weeks later, she was readmitted to our hospital due to fatigue and liver dysfunction. Laboratory findings on admission were as follows: 566 IU/L AST, 875 IU/L ALT, 214 IU/L ALP, 1.7 mg/dL TB, 12 IU/L γGTP, 1785 mg/dL IgG, and 71.4% PT. The anti-nuclear antibody (ANA) titer was ×80 with a homogeneous pattern, positive results were obtained for anti-smooth muscle antibody, and HLA DR was 4. Viral infections were excluded by serological testing. Biopsy specimen from the liver revealed bridging perivenular necrosis and interface hepatitis (Figure 2A and B). In this case, IgG was not elevated, which is atypical for AIH. However, according to the criteria for AIH [6], the patient had a score of 16 on the second admission, indicating definite AIH, compared to a score of 9 on the first admission. Conversely, according to the criteria for drug-induced liver injury [7], our patient displayed a score of 2, indicating a low possibility that this case represented drug-induced liver injury. Moreover, lymphocyte-stimulation testing for methylprednisolone yielded negative results. These clinical and laboratory findings supported the diagnosis of AIH. After administration of prednisolone and UDCA, symptoms and liver function improved. The charts for the overall clinical course are shown in Figure 3. Her condition is now under control with prednisolone, 10 mg/day.
DISCUSSION
MS is an inflammatory demyelinating disease of the central nervous system. Liver dysfunction is not always caused by MS itself, but can result from many factors, such as drug toxicity, fatty infiltration and viral infection. Liver dysfunction in patients with MS is most commonly caused by drugs. IFN-β, which raises the serum ALT level as a side effect, is one of the drugs well known to cause liver injury in patients with MS. Tremlett et al [8] reported that 36.9% of patients with MS develop new elevations of ALT, although only 1.4% reach grade 3 hepatotoxicity (>5-20 × the upper limit of normal). In patients with MS receiving IFN-β, if de novo elevation of aminotransferases is mild, IFN-β treatment is often continued, and the elevated aminotransferases return to almost normal [4]. However, severe liver dysfunction does not resolve simply after stopping IFN-β, and prompt treatment is needed. A case of fulminant liver failure occurring during IFN-β treatment has been reported [9]. Our patient satisfied the criteria for drug-induced hepatitis, but not for AIH, on the first admission. Byrnes et al [10] have also reported drug-induced liver injury secondary to IFN-β in patients with MS. However, the precise mechanisms underlying IFN-β-induced hepatotoxicity remain unclear. IFN-β may cause autoimmune complications including thyroiditis, lupus erythematosus and rheumatoid arthritis [11]. Duchini et al [3] have reported a case of AIH occurring during treatment with IFN-β. Conversely, Reuß et al [5] have reported a case of AIH that developed after a high-dose intravenous methylprednisolone pulse in MS and speculated that AIH may occur in patients with multiple autoimmunity as an immune rebound phenomenon after immunosuppressive regimens. The typical histological pattern of AIH is chronic active hepatitis that shows portal inflammation with fibrosis, interface hepatitis and rosette formation of hepatocytes. However, few cases of AIH with centrilobular necrosis (CN) as the dominant finding have been reported [12]. Recently, some cases of CN with autoimmune features have been confirmed as early-stage AIH [13,14]. Acute-onset AIH sometimes does not satisfy the AIH criteria serologically and shows CN histologically [14-16]. Although our patient showed a typical pattern of AIH at the second admission, the liver dysfunction at the first admission may have been due to early-stage AIH. The cause of AIH in this patient was most likely an immune rebound phenomenon after pulsed methylprednisolone, because the second episode of liver dysfunction occurred after pulsed methylprednisolone therapy rather than after IFN-β therapy. In fact, some reports have described AIH occurring in patients with multiple autoimmunity after pulsed methylprednisolone therapy [5,17,18]. In particular, withdrawal of glucocorticoids after pulsed methylprednisolone therapy might have induced an immune rebound phenomenon in the present case. However, we cannot deny the possibility that AIH was induced by IFN-β in this patient. She received IFN-β treatment before the first admission. Moreover, Misdraji et al [12] reported that AIH with CN occurs after IFN-β therapy in patients with MS. In conclusion, the prevalence of AIH seems to be about 10-fold higher in patients with MS than in the general population [19]. Attention should be paid to the development of AIH after pulsed methylprednisolone or IFN-β treatment in patients with MS, and if AIH develops, immediate treatment with corticosteroids or azathioprine should be initiated.
Moreover, administration of corticosteroids or azathioprine after pulsed methylprednisolone might be effective for preventing the development of AIH.
2018-04-03T01:18:38.834Z
2008-09-21T00:00:00.000
{ "year": 2008, "sha1": "6367a4d04f97f6ac00a881feeab42c9aedd89db8", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3748/wjg.14.5474", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "91f4aa4b9f58616eb197720fc2e0e3ac375555b9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
264348917
pes2o/s2orc
v3-fos-license
Polyvalent s-block elements: A missing link challenges the periodic law of chemistry for the heavy elements

Significance
Many chemistry textbooks assume the unreserved validity of the Periodic Law, placing elements of valence G in group column G of the Periodic Table. The physical origin is the large energy Gap between the atomic core and valence shells. Combining literature data on superheavy p-, d-, and fg-block compounds with present results on s-block polyfluorides shows that the decrease of the Gap with increasing nonrelativistic quantum numbers and relativistic spin–orbit splittings of the heavy atoms changes the chemical periodicity: i) The valences of the superheavy s/p-elements are raised/reduced, deviating from the group labels G. ii) New periods no longer begin at groups 1 and 12. Both "failures" indicate the fading-out of the classic Periodic Law for elements above Z around 110.

Preface
The Supplementary Information (SI) contains five parts. (i) More details on the molecular quantum computational procedures applied in the present work, using commercial software, are described in section S3. (ii) The new results thereby obtained for light to super-heavy alkali and alkaline-earth (poly)fluorides of elements from groups 1 and 2 are presented in some detail, including wavefunction analyses and reaction energetics, and in particular crosschecks with different density functional and correlated wavefunction approaches and with different basis sets, all in sections S4-S7. The optimized geometric nuclear (x,y,z) coordinates of all 66 computed molecular species (BO / PBE / SOC-ZORA / TZ2P) are collected in a separate supplementary xyz file. (iii) The variation of the atomic core-valence energy gaps at the non-relativistic level, in principle already known from literature data for decades but insufficiently considered in general chemistry, is analyzed in detail and its relevance is stressed in section S2. (iv) Theoretical literature data at the relativistic level on the chemical valences of selected superheavy elements, namely the late 6d elements from groups 11 to 13 (!), the 7sp elements from groups 14 to 18, and some 5g6f7d8sp elements of extended group 3, are reviewed in section S1. The missing molecular data on the superheavy elements of groups 1 and 2 in-between are presented in the present work. The additional literature mentioned in parts (i) to (iv) is listed in section S8.

S1.
Short Overview of Chemical Relativistic Effects of the Heavy s-p Block Elements at the Bottom of the Periodic Table The radial extensions and energies of the partially occupied valence shells and of the outer occupied core shells of the atoms form the basis of the microscopic explanations of the vast majority of the chemical phenomena.The world behaves according to the laws of relativistic quantum mechanics.It has become customary to simulate this behavior in a two-level model: (i) the nonrelativistic approximation (which is sufficient for most of the chemistry of the lighter elements, indeed covering the larger portion of practical chemistry), and (ii) the so-called relativistic corrections, which account for the nonrelativistic errors of the chemical properties, the fractional differences increasing roughly as Z 2 and becoming qualitatively non-negligible in the lower part of the periodic system of elements.S1-S7 The central axis or backbone of chemical periodicity is the group of noble gases at the end of the sp-block of elements, and before the beginning of the next s-block.The noble gases are unique in carrying a closed outer (sp) 8 shell, above which there is a large orbital energy gap, approximately decreasing for increasing period number n at the order of ≈ n −2.5 (at the non-relativistic level of approximation, see below sect.S2).Therefore the elements after the noble gases in groups 1,2, etc. possess, above a chemically inactive (sp) 8 core shell, 1, 2, etc. chemically active valence electrons in s orbitals, which may hybridize with p or d orbitals.From period n = 4 onward, there appears another gap above an outer (spd) 18 shell in groups 12, 13, etc. with 2, 3, etc. valence electrons in s and p orbitals.Anyway, a new period begins with valences 1, 2, etc., in the heavier periods with a second sub-period with valences 2, 3, etc.In our article, we investigate molecules of heavy s-block elements. We here below review a few qualitative chemical peculiarities of the heavy elements of groups 11 to 18 (up to the s block with groups 1 and 2), and thereafter elements in extended group 3. Group 11, durable gold (Au) and fleeting roentgenium (Rg, lifetimes of min to sec): Most common oxidation states of group 11 elements are +1, for copper also +2, for gold also +3.The gold chemistry is well investigated.S8-S10 The relativistic 5d 5/2 destabilization and 6s stabilization together cause the remarkable color due to the absorption in the visible; they also cause the unusually high (+5) and low (−1) oxidation states, the pronounced sd hybridization, strong electronic correlation and dispersion effects, and the strongest metallo-(auro-)philicity.Rg is pentavalent too, the predicted heptavalency is questionable.S11,S12 Pressure may activate more d-electrons, for instance stabilizing Au +6 F 6 .S13 Group 12, liquid mercury (Hg) and 'noble' copernicium (Cn): On the same atomic physical grounds, the common valence of group 12 elements is 2. 
Hg forms a liquid metal under standard conditions and has an unusually strong tendency to form dimeric (Hg−Hg) 2+ units.Only under exotic conditions, Hg may be tetra-valent, while Cn, a rather noble liquid, too, is more easily oxidized to the tetravalent state.S14-S17 Group 13, thallium (Tl) and nihonium (Nh): Tl differs from its lighter homologs by its far more prominent valence state of 1.The relativistic stabilization of both the 7s½ and 7p½ levels supports the speculation of prominent +1 and −1 oxidation states for Nh.Oxidation state +3 has also been found for both elements, Tl and Nh.S18-S20 Group 14, lead (Pb) and flerovium (Fl): The dominant tetra-valence of the lighter carbon-group elements changes over tin to the dominant di-valence of Pb.The technically important relativistic enhancement of the power of the lead battery was mentioned in the main text.Fl with two closed valence shells 7s½ 2 7p½ 2 is expected to be rather noble and to form mainly low-valent compounds.S14 Yet, tetra-valency is still possible for Fl, as for Pb.Group 16, polonium (Po) and livermorium (Lv): These and the following elements, in particular those isotopes that are more easily obtainable, are extremely short-lived.The intense radioactivity results in the radiolysis of chemical bonds and radioactive self-heating, so that a chemistry of 'real substances' is hardly possible but is mainly based on tracer techniques.The 6p½ 2 6p 3 / 2 2 shells of Po give rise to oxidation states −2, 0, +2 and +4, while the 6s½ 2 shell is already rather inert due to the relativistic stabilization.Extrapolating the trends down the group, Lv is expected to show dominant oxidation state +2, with less stable +4 and −2, thereby differing more and more from the lighter homologs.S25,S26 Group 17, astatine (At) and tennessine (Ts): The isotopes of At (a-statine meaning un-stable in Greek) are even much more short-lived (τ ½  ⅓ day) than polonium, so that macroscopic specimens of its substances would immediately degrade or vaporize due to the radioactivity.Chemistry in its original meaning as the art of improving materials comes to its natural end, for the first time along with the series of elements.The first island of chemical stability then shows up for radium ( 88 Ra, which is one target of our present research) up to the later 5f-elements (say, 100 Fm, fermium).Astatine shows similarity to iodine in tracer experiments though being a bit more noble and metallic.Ts behaves differently from the lighter group-17 elements, due to the relativistic stabilization of the 7s½ 2 7p½ 2 quasi-core shells and destabilization of the 7p 3 / 2 3 valence shell.Oxidation states +1 and +3 are expected, while the common oxidation states of the lighter halogens, −1, +5 and in particular +7 are supposed to be hardly stable.S27-S30 Group 18, radon (Rn) and oganesson (Og): The maximum oxidation states of the noble gases under ambient conditions are He(0), Ne(0), Ar(0), Kr(+2), Xe(+6 or +8).Whether molecules such as XeO 4 or XeO 2 F 4 S31,S32 prove oxidation state Xe(+8) depends on whether one counts the participation of the Xe-5s 2 shell as sufficient for explicit Xe-5s bonding.For radon, only Rn(+2) is surely known so far, while higher oxidation states +4 and +6 appear possible, too.Very little is known about the chemistry of Og.S33-S39 Groups 1 and 2 are reviewed in the main text and further investigated there.Higher oxidation states than 1 and 2 had been observed (experimentally or computationally) under very high pressures such as inside the Earth (up 
to several Mb).These pressures are now known to activate the (n-1)p 6 noblegas shells of Cs, Ba and Ra, meta-stabilizing EF k poly-fluorides with k up to 6 (or 8): under pressure, the s-metals can behave like p-elements.S40-S42 Similarly the (n-1)d 10 shells of the heavy groups 11 and 12 elements are activated by high pressures, too, yielding unusually high oxidation states such as in the Au +5 and Hg +4 poly-fluorides mentioned above.They all appear as harbingers of the change of the core-valence gap and level structure in the natural system of elements for super-heavy elements of Z>110.S43-S45 It had been noted that spin-orbit effects overwhelm the orbital structure in the outer core and valence shells.In particular, after 8s 1/2 , the 8p 1/2 , 7d 3/2 , 6f 5/2 and 5g atomic orbital levels are near-degenerate and tend to be populated in the opposite order than according to the above-mentioned n+ℓ rule.Anyway, the early so-called 5g elements may behave somewhat like the early actinides.S48-S51 In summary, while the f-and d-and early sp-elements of period 7 from thorium ( 90 Th) to roentgenium ( 111 Rg) and copernicium ( 112 Cn) tend to higher valences than their lighter When talking about the fading away of the Periodic Law for high Z values, we must at first define what periodicity shall really mean.Historic and conceptual views help.S1,S2,S52-S55 The early 19 th century was ripe to discover ('vertical') groups of elements behaving chemically similar under common laboratory or industry conditions in our habitable crust of the Earth,.There were the groups of the alkali or alkaline-earth metals or of the halogen or chalcogen non-metals.The later 19 th century was ripe for Mendeleev and his contemporaries to discover the chemical valence or highest oxidation number as the chemically relevant number for characterizing the groups.It was discovered that the numerically ordered array of elements (at first approximately by the average atomic mass, since Moseley by the element number Z) exhibits repeated increases of the valence number: 1,2,3,… for elements in groups 1,2,3… until a steep drop before the start of the next ('horizontal') period; and of valences 2,3,4,… for elements in groups 12,13,14… . Since Bohr, the valence numbers are known to be the numbers of chemically active 'valence' electrons.Hence, a clear distinction between chemically inactive core electrons and active valence electrons is required, which is possible if there is a clear energy gap between the atomic core and valence shells.Investigations of the poly-valence in chemical compounds can reveal the chemical valence of an element under common conditions.Atomic data give hints what to search for. S2. 
Energy-Trends of Non-Relativistic One-Electron Atomic Orbitals

The periodic properties of the elements are due to the periodically changing core and valence structures of the atoms. Neutral atoms of element number Z possess Z negative electrons -e, bound by the nuclear charge +Ze. For increasing Z, large one-electron orbital energy gaps emerge above some just-filled shells, namely: - above the Li + -1s 2 and the Na + -2p 6 to Fr + -6p 6 core shells, there are G = 1, 2, 3, … electrons in s, p, d and/or f valence shells of the elements from groups G = 1, 2, 3, …; - above the Zn 2+ -3d 10 to Hg 2+ -5d 10 core shells, there are 2, 3, 4, … electrons in the s,p valence shells of the elements of groups G = 10 + 2, 3, 4, …; - and above the Lu 3+ -4f 14 and Lr 3+ -5f 14 core shells, there are G = 3, 4, 5, … electrons in the d,s valence shells of the heavy elements of groups G = 3, 4, 5, … All these core shells are energetically low enough to remain chemically inert under common ambient conditions. The periodic appearance of large core-valence energy gaps above those 'inert' core shells, and G (modulo 10) chemically active electrons in the higher valence shells, explains both the chemical periodicity of the elements along a period and the chemical similarity among the G-valent elements (with different cores) in column G, thereby establishing the complex structure of the periodic system of elements. The electronic core-valence structure of the atoms is influenced both by the shell effects at the non-relativistic level of approximation and (for the heavier elements) by the relativistic corrections. Here we investigate the trends of a non-relativistic quantum mechanical model (non-correlated Hartree-Fock approximation), which is sufficient to understand the chemistry of the majority of the lighter elements. It helps to understand how the relativistic effects influence heavy-element chemistry. The atomic one-electron orbital energy patterns of the group 1 atoms at the non-relativistic level of approximation are displayed in Fig. S1. The energies increase gradually going down the group, and the atoms have corresponding electronic configurations. The energies of the ns valence and the (n-1)s,p outer-core orbitals in period n vary smoothly from the light to the heavy ( 119 E) elements. (Of course, the primogenic (n-1)s core shell of Li without an (n-1)p companion is special, the peculiarity of the 2nd period already noticed by Mendeleev.) The orbital energies may be represented in hydrogen-like fashion as ε(nℓ) ≈ -Ry · Z eff ² / n², where Ry is the Rydberg constant (Ry ≈ 13.6 eV), Z eff the effective nuclear charge and n the principal quantum number, defined by the number of orbital nodes. We here do not use the effective quantum number of quantum defect theory, but follow the suggestion of Shibuya. S59 For non-relativistic hydrogen-like one-electron atomic ions, such as Li +2 , Na +10 , K +18 etc., the electron feels the full nuclear attraction and Z eff is the total nuclear charge, Z eff = Z. The (n-1)p/ns gaps are then obtained as Δε = ε(ns) - ε((n-1)p) ≈ Ry · Z eff ² · [1/(n-1)² - 1/n²]. The lengths of the periods n in the periodic table increase in double steps of 2·[(n+2)/2]², meaning that Z increases down the group approximately proportionally to ⅙ n³. Thus, the gap along the hydrogen-like series of ions would diverge as +O(n³). For neutral many-electron atoms, however, there acts the nuclear screening by the inner-shell electrons, increasing as O(Z). The outer electrons feel only a shielded nuclear attraction, corresponding to an effective Z eff < Z.
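As a concrete illustration of the two regimes just described, the short Python sketch below evaluates the (n-1)p/ns gap from the hydrogen-like expression above, once with the unscreened nuclear charge (Z eff = Z) and once with screened effective charges. The screened Z eff values in the script are rough, energy-based placeholder guesses inserted only for illustration; they are not the Hartree-Fock fits of Table S1 and Fig. S2a.

```python
# Illustrative sketch only: compares the (n-1)p/ns gap of unscreened hydrogen-like
# ions (Z_eff = Z, diverging roughly as n^3) with a crudely screened neutral-atom
# estimate (Z_eff << Z, staying of the order of a few tens of eV).
RY = 13.6057  # Rydberg constant in eV

def eps(z_eff, n):
    """Hydrogen-like orbital energy in eV: eps = -Ry * Z_eff^2 / n^2."""
    return -RY * z_eff**2 / n**2

# (symbol, period n, nuclear charge Z, placeholder Z_eff for ns, placeholder Z_eff for (n-1)p)
atoms = [("Na", 3, 11, 1.8, 3.5),
         ("K",  4, 19, 2.2, 4.2),
         ("Rb", 5, 37, 2.6, 5.1),
         ("Cs", 6, 55, 3.0, 5.6)]

for sym, n, z, zeff_ns, zeff_np in atoms:
    gap_bare = eps(z, n) - eps(z, n - 1)              # unscreened: keeps growing with n
    gap_scr = eps(zeff_ns, n) - eps(zeff_np, n - 1)   # screened: slowly decreasing
    print(f"{sym}: bare gap = {gap_bare:6.0f} eV, screened gap = {gap_scr:5.1f} eV")
```

With any such choice of screened charges the qualitative picture of this section is reproduced: the unscreened gap grows steeply down the group, while the screened gap shrinks slowly toward the small limiting value discussed further below.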
Table S1 lists the atomic orbital energies of the (n-1)p and ns shells for group 1 atoms, calculated at the level of the nonrelativistic orbital approximation (Hartree-Fock). The electron in the ns orbital 'feels', at large distances, the single charge of the ionic A + core, but when it penetrates into the core, it feels a stronger attraction. On average, Z eff > 1. An electron in the (n-1)p shell lies further inside and 'feels' a larger Z eff than in the ns shell. Educated guesses for Z eff can be obtained by Slater's or by Shibuya's (improved) screening rule. S59,S60 Due to the period doubling (i.e. lengths 8, 8, 18, 18, 32, 32, …), the Z eff values in the odd periods are somewhat larger, and in the even periods somewhat lower, in particular for small n and Z, see Fig. S2a. For large n and very large Z, the Z eff may converge to a respective constant. Simple linear fits of Z eff versus the period number reproduce the computed values for Z up to 120 semi-quantitatively, as shown in Fig. S2a (n is the period number in the periodic table). Inserting these approximations for Z eff into the hydrogen-like expression, the ns and (n-1)p orbital energies are expressed as explicit functions of the period number n. When it comes to the very heavy elements, the increase of nuclear charge and of orbital screening partially cancel each other, leading to a finite gap, in contrast to the hydrogen-like ions. Our fit of the nonrelativistic HF results indicates that the core-valence energy gap between (n-1)p and ns converges to a small constant limit for large n, Δε(high n) → 7 to 11 eV (from Figs. S2a and S2b, respectively). Already at the non-relativistic approximation, the gap becomes too small for "inert noble gas shells", so that the periodicity fades away. In the more realistic relativistic case, the order of (n-1)p 3/2 and ns even inverts (Δε < 0), according to Fricke's and Pyykkö's investigations of high-Z atoms around Z = 168. S6,S53 Here it is appropriate to stress one point. The periodicity of the chemical elements is determined by the appearance of sufficiently large orbital energy gaps, which differentiate between the chemically inactive core electrons and the active valence electrons. Remarkably, in chemical education and philosophy, only the energetic n,ℓ order of the orbitals, and not the more relevant size of the gaps, is considered. It is assumed that the energetic order of the atomic one-electron levels is determined by the so-called "general (n+ℓ,n) Aufbau principle". It is taught in the vast majority of chemistry textbooks, and forms the basis of the research into a general symmetry principle governing all element chemistry, S61-S63 although the (n+ℓ,n) orbital energy rule holds for the group 2 atoms only. S1,S64,S65 For the majority of elements, for which the non-relativistic simulations are sufficient, the inner core shells of atoms in period n are occupied in a hydrogen-like order. However, the outer core shells of an atom with valence shell n(sp) G or (nsp, (n-1)d) G etc. (G = group number = number of valence electrons in period n) are occupied in an 'inverted' order. The reason, as mentioned before, is that electrons in orbitals with angular momentum quantum number ℓ and centrifugal force ~ℓ(ℓ+1)/r 3 are centrifugally pulled away from the nuclear center and are the better screened, the larger the ℓ value is. While the np levels are energetically only slightly above the ns levels, always, and form a jointly occupied n(sp) 8 shell, from the group 18 (or 0) noble-gas atoms onward, the nd and nf levels are up to ca.
3 / 2 and 3 quantum numbers higher.This "Δn eff " gap is numerically obtained from actual observations or computations of many individual cases or derivations from the Thomas-Fermi model with orbitals in an approximate Tietz potential.S66-S69 This is the physical reason for the period doubling. In conclusion, in hydrogen-like atoms, the energy gap between the (n-1)p and ns orbitals is divergent for increasing Z and n.When the electronic shielding effect in the neutral many-electron atoms is taken into account, the outer electrons are prevented to feel the full nuclear charge.The core-valence energy gap is thus convergent to a small limiting value, in the approximate non-relativistic model.When it comes to the more realistic model with relativistic behavior, the order of (n-1)p and ns orbital energies becomes even inverted when n is large enough. S3.1. Quantum-Chemical Computational Methods The theoretical studies were carried out using quantum-chemical quasi-relativistic densityfunctional (DFT) and ab-initio wave-function (WFT) approaches for various molecular species of type (A − ,Ae)F k (k = 0 -9 ; A = alkali metal Li, Na, K, Rb, Cs, Fr, 119 E ; Ae = alkaline earth metal Ba, Ra, 120 E).DFT calculations were performed, using the Amsterdam Density Functional (ADF) Program at the all-electron level.S70 The PBE (and for comparison also the PBE0, B3LYP and HSE06) energy density functionals were applied.S71-S75 Slater-type orbital (STO) basis sets of triple-zeta plus two polarization functions quality (TZ2P; double zeta for the core shells; diffuse s,p functions for the valence shell) were used.S76,S77 The scalar relativistic (SR) and spin-orbit coupling (SOC) effects were taken into account by the zero-order regular approximation (ZORA).S78-S80 The spherical Gaussian nuclear charge distribution model was used.S81 The optimized geometric structures and the harmonic vibrational frequencies were determined at the SR-ZORA and SOC-ZORA levels. Concerning the latter, both the unrestricted non-collinear and the restricted spin-orbit approximations were considered; S82,S83 the 119,120 E species with largest SOC effects show negligible differences of the average E-F bond lengths.Therefore, all structures were determined assuming Kramers restricted collinear spinor-orbital pairs. We also performed ab-initio WFT calculations for the EF k species applying the advanced electron correlation methods in the MOLPRO 2018.2 program.S84,S85 Single-reference CCSD(T) calculations (coupled-cluster with single and double and perturbative triple excitations) S86,S87 at the SR level of approximation were performed for the geometric structures optimized at the PBE/SOC level.The K-1s 2 −2p 6 , Rb-1s 2 −3d 10 , Cs-1s 2 −4d 10 , Fr-1s 2 −5d 10 , Ra-1s 2 −5d 10 , 119 E-1s 2 −5f 14 and 120 E-1s 2 −5f 14 core-shells were replaced by Stuttgart-Cologne energy-consistent relativistic SOC pseudo-potentials (ECP10MDF for K, ECP28MDF for Rb, ECP46MDF for Cs, ECP78MDF for Fr, ECP78MDF for Ra, ECP92MDFQ for 119 E and ECP92MDFQ for 120 E).Adapted double-to-triple-valence polarized basis sets (TZVP) were applied for most elements.S88-S90 For Li, Na and F, the aug-cc-pVTZ basis sets were used.S91 For the anionic species 119 E − F k=0-9 an additional diffuse s basis function with exponent 0.0036 was added to overcome the restriction due to the RECP basis contraction.All non-core shells were correlated in the CCSD(T) calculations. 
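For readers who want to reproduce the flavor of such calculations with freely available tools, the following minimal sketch runs a scalar-relativistic PBE single point on KF with the open-source PySCF package. It is only an illustration under stated assumptions: the study itself used ADF (SR/SOC-ZORA, PBE, Slater-type TZ2P bases) and MOLPRO (CCSD(T) with Stuttgart-Cologne pseudopotentials), not PySCF, and the K-F distance of 2.17 Å is a round placeholder rather than an optimized value.

```python
# Illustrative sketch only; not the software or settings used in this work.
# PySCF with a Gaussian def2-TZVP basis and the spin-free X2C Hamiltonian stands in
# for the same general kind of scalar-relativistic PBE calculation described in S3.1.
from pyscf import gto, dft

mol = gto.M(atom="K 0 0 0; F 0 0 2.17", basis="def2-tzvp", verbose=0)

mf_nr = dft.RKS(mol)          # non-relativistic PBE reference
mf_nr.xc = "pbe"
e_nr = mf_nr.kernel()

mf_sr = dft.RKS(mol).x2c()    # scalar-relativistic (spin-free X2C) PBE
mf_sr.xc = "pbe"
e_sr = mf_sr.kernel()

print(f"E(non-rel PBE) = {e_nr:.6f} Eh,  E(scalar-rel PBE) = {e_sr:.6f} Eh")
```

A sketch like this cannot capture the spin-orbit effects that are decisive for the 119 E and 120 E species discussed above; for those, two-component or four-component treatments and the diffuse basis functions mentioned in S3.1 are required.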
SOC corrections to the SR-CCSD(T) energies of the geometric ground and higher-energy isomers were obtained with the help of CASSCF/CASPT2/SO-SI calculations (small self-consistent multi-configuration spin-orbit coupled state interaction, with the diagonal elements of the CI matrix replaced by the CASPT2 state energies).S92 The SOC matrix elements were calculated with the help of the SOC Coulomb-Breit pseudo-potentials.The SOC was considered for the E-(n-1)p outer-core shell in the molecules 119 EF k − (k = 1 -9), AF 6 − (A = Cs, Fr, 119 E), and AeF 6 (Ae = Ba, Ra, 120 E) by choosing 4 or 5 appropriate MOs for the active space. S3.2. Methods Used for the Figures in the Main Text Fig. 1: One-electron SCF orbital energies ε of the (n-1)s,p outer-core shells and of the ns valence shell of group I atoms were calculated by the spin-unrestricted non-collinear Hartree-Fock method using the ADF program.STO basis sets of quadruple-zeta plus four polarization functions quality (QZ4P) were employed.The SR and SOC effects were taken into account by the exact transformation of the 4-component Dirac equation to 2-components (X2C).Our results agree well with the Dirac-Fock orbital energies of Desclaux, in particular for the light atoms.The experimental values of initial and final configuration-averaged energy differences ε exp of one-electron ionization processes derived from the 'NIST Atomic Spectra Database'are a bit larger, in particular for the heavy atoms.S70,S76,S93-S95 Fig. 2: The geometric structures of 119 EF k -molecules were determined by applying the quasi-relativistic spin-orbit coupled PBE density-functional approximation, using Slater-type orbital basis sets of TZ2P quality.S71,S76-S80 The final energies were then determined by ab-initio many-electron correlated CCSD(T) calculations, with SOC corrections obtained by the spin-orbit coupled state-interaction CASSCF/CASPT2 approach, see also sect.S2.1 above.We stress that the neglect of spin-orbit coupling would yield qualitatively incorrect trends, see p. S15 below.Mulliken populations were obtained from the one-particle density matrix of SOC PBE density-functional calculations.S86,S87,S92 ~ S13 ~ S3.3. Comments on Breit and QED effects In the relativistic regime, the electrostatic Coulomb interaction needs to be corrected by the Breit correction due to inter-electronic magnetic and retarded interactions.In the strong fields near the nucleus, the high velocities and accelerations of the electrons are connected with quantum electro-dynamical (QED) vacuum polarization and fluctuation effects. The Breit corrections in the atomic valence shells are empirically accounted for automatically by the Cologne effective core pseudopotentials S88 that were applied in the correlated ab-initio calculations.The all-electron density functional calculations without Breit correction do not show respective typical difference trends, indicating that the Breit corrections do not dominate the multi-origin accuracy-noise.Further, the Breit corrections of Aucar et al.S30 , extrapolated to the 120 E atom, are expected significantly smaller than an eV. QED effects of the tails of the valence shells near the nucleus are small and even smaller than the Breit corrections.S96-S98 In summary, we may assume that the Breit and QED corrections are qualitatively negligible in comparison to the present large complex stabilization energies of the order of many eV. 
) semicore shell through 119 E(7p 3/2 )-F(2p σ ) overlap, leading to stronger and shorter bonds for increasing coordination number, which is another rather unusual phenomenon.When more than six fluorine atoms are added to 119 E − , the compact low-energy 7p 1/2 orbital becomes involved.The average bond distance still decreases though less strongly, while the overall bonding energy no longer increases, but becomes reduced for k > 6 (see Figs. 4a and S5). The non-relativistic approximation (NR) is obviously unreliable for the heavy 119 E element.Both the scalar and spin-orbit relativistic effects must be accounted for.Spin-orbit coupling (SOC) destabilizes the E-7p 3 / 2 orbital, what strengthens and contracts the 119 E-F bonds for k ≤ 6, but for k > 6 the E-7p 1 / 2 orbital, which becomes involved too, is stabilized and contracted and does less well overlap with F-2p.Thereby the bonds are weakened and expanded.We added an extra diffuse s basis function S88 and compare the results of SOC calculations for the stepwise stabilization by fluorination of 119 E − without and with the diffuse s basis function in Table S2.Diffuse functions are important for the naked atomic 119 E 1− (8s 2 ) anion, stabilizing it in the present case by nearly ½ eV.The di-and tri-atomic 119 EF − molecular anions with ionic E(8s) bonding and the negative charge distributed over the fluoride species, are less stabilized by ca.0.2 eV per F ligand.Diffuse functions have no significant effect anymore, when the negative charge is distributed over more F atoms bonded by the E(7p 3/2 ) orbitals.In summary, diffuse basis functions are needed for species with significant 119 E(8s) population.We have crosschecked our results on the reaction energies of 119 E − + k / 2 F 2 → 119 EF k − by performing calculations using different density functionals: PBE, B3LYP, PBE0, HSE0 (and comparing with CAS-SCF-SI/SO), see Fig. S6.We see rather satisfactory agreement for the stable poly-fluoride complexes up to k = 6, indicating a 7p 3/2 8s 1/2 valence shell for super-heavy s-block element 119 E (similarly for 120 E) with the highest oxidation state +5 (and +6, respectively).However, for k > 6, corresponding to the chemical activation of the 7s 1/2 7p 1/2 core shells by even higher enforced oxidation, PBE appears to show over-binding as compared to CAS-SCF-SI/SO, with PBE0, B3LYP and HSE06 in-between.All calculations indicate only meta-stability for any higher oxidation than +5 and +6 for 119 E and 120 E, respectively, due to the large 7p 1/2 -7p 3/2 energy and radial splitting. The F-F bond energies (appearing in the expression for ΔE) are compared in Table S3 for several computational approaches.For PBE, the error per half-bond is +0.3 eV, for PBE0 and B3LYP is ca.−0.24 eV, and for HSE06 −0.12 eV.Apparently, fortuitous DFT error cancellation occurs when E-F and F-F bond energies are compared in the key fluorination reaction.Ab-initio CC-SD(T) and CAS-SCF approaches with triple-zeta-polarized basis sets appear reliable; compare also Fig. S5 above. Table S3.Bond dissociation energy of F-F (BDE in eV), obtained with different approaches. S5. Poly-Fluoride Molecules of Alkali and Alkaline-Earth Metals The lighter (alkali and) alkaline-earth metal difluoride molecules S99-S101 are known, and were here recalculated, to be linear.However, beginning with periods 6 and 4, respectively, the dihalide molecules are quasi-linear (i.e. 
with rather flat potential surfaces and large bending amplitudes: CsF 2 − , FrF 2 − , 119 EF 2 − ), or statically bent (CaF 2 to 120 EF 2 ), see Table S4.Of course, for the quasi-linear species, the harmonic approximation cannot yield physically reasonable bending vibrations (see Table S5).If the energy gap E between the deformed and symmetric structures is significantly larger than 1 k B T (≈ 0.026 eV ≈ 0.6 kcal/mol ≈ 2.5 kJ/mol at T = 298.15K), the molecules are statically deformed.For instance, linearized RaF 2 and 120 EF 2 have energies raised by 0.16 and 0.29 eV, respectively, and are represented as bent molecules. Otherwise, there is strong rotation-vibration coupling, large anharmonicity and large-amplitude vibration with an overall symmetric structure. Table S4.Bending angles (in o ) of s-block di-halides [X-E-X] −,0 .For the quasi-linear species, the calculated bending angle minima are given in parentheses.Usually, the force constant of the antisymmetric bond stretching vibration, F-E-F, is a little smaller than of the symmetric stretching, F−E−F.However, for a light central atom, the effective reduced mass is smaller for the antisymmetric stretch.Therefore, one is used to expect higher antisymmetric stretching frequencies, which is the case for species such as MgI 2 .However, for a very heavy central atom, the mass effect is small, and the antisymmetric vibrations have smaller wave-numbers as shown in Table S5. Molecule Table S5.s-block poly-fluorides EF 2 q , EF 6 q and E(F 3 ) 2 q (lower symmetric ground state, and higher-symmetric transition state).Bond angles α(in o ); (mean) bond lengths R(E-F) (in pm); energy differences ΔE (in eV ≈ 96.5 kJ/mol) between the low-and high-symmetric stationary points on the potential surface (SR-CCSD(T), with CASPT2/SO-SI correction for species with heavy E from period 6 onward); vibrational frequencies ν (in cm −1 ; harmonic ZORA-PBE-DFT approximation, with SOC for species with heavy E from period 6 onward; values at the high-symmetric transition state in parentheses). Molecule Central Atom E *The more reliable CCSD(T) calculation reveals that the deformed C 2v -Li + (F 1/3-) 6 structure is more stable by 2.55 eV as shown in Fig 3a where the F-F interaction is better described. ** For the sake of simplicity, we only display the data of species containing Rb. *** The O h -RaF 6 species with high oxidation state (OS) Ra 6+ and the D 2d -Ra(F 3 ) 2 species with common OS Ra 2+ are similar in energy.Our DFT calculation yields RaF 6 a bit more stable, while our WFT calculation yields Ra(F 3 ) 2 more stable by 0.69 eV.indicate that the isomer is energetically more (less) stable w.r.t. to F 2 release.For a given isomeric species, the E-F bond length increases steadily from the lighter to the heavier central metal ions (also from Cs to Fr to 119 E, and from Ba to Ra to 120 E).However, the oxidation state of heavy central group-1 atoms may increase from +1 or +2 by 4 units to +5 or +6, respectively; this corresponds to oxidizing off of the p 3/2 4 semi-core shell, and to the rearrangement of E(F 3 ) 2 to E(F) 6 .Then the E-F distances decrease by more than half an Å, due to the E-(n-1)p−F-2p covalent overlap interactions. 
The ns orbitals are rather diffuse and contribute to weak covalences.The hydride molecules may be interpreted as immersing a proton into the A − (ns 2 ) shell.The covalent A-H bond lengths follow the A-ns orbital radii trend, both with maximum for Cs.S103-S109 However, concerning the majority of more ionic alkali compounds including the A-F k molecules, the bond lengths are mainly determined by the closed-shell closed-shell Pauli repulsions.Therefore, the ionic bond lengths, and the effective ionic radii of the alkali cations A + , increase smoothly with increasing Z up to 119 E. S110 ~ S24 ~ This For the super-heavy alkali metal, 119 E-7p 3/2 and F-2p are near-degenerate and mix, contributing covalent bond energy. Spin-orbit coupling (SOC), which does not appear in the common orbital picture, plays an important role in the chemical bonding of heavy elements.Namely, SOC in general stabilizes the atomic valence shells with spherical symmetry more than the molecular valence shells with lower geometric symmetry.Therefore, SOC usually attenuates the bonding.However, the upper part of the atomic core shells is destabilized by SOC (Fig. 1), so that the outer core shell eventually becomes valence-active.Thereby SOC promotes the orbital mixing and enhances the 'core bonding'.At the nonrelativistic level, the 7s and 7p 'noble gas' core orbitals have very similar radial extensions (Fig. S13).At the scalar relativistic level, the 7s core and 8s valence orbitals are both energetically stabilized and radially contracted, which improves the bonding power of 8s.Spin-orbit mixing stabilizes and contracts the 7p 1/2 so that now 7s 1/2 and 7p 1/2 have similar radial extensions at low energy, forming a sound joint atomic core shell.The two energetically higher 7p 3/2 spinors at higher energy protrude radially from the core shell into the valence region and act as valence orbitals with covalent bonding power.) 2 ] − and O h -[A 5+ (F − ) 6 ] − species (A = alkali metal; bold entries for the more stable isomer), calculated at the PBE/ZORA level (with SOC for species with heavy Cs, Fr, 119 E) using TZ2P basis sets.Q A + (ℓ) means the ℓ-orbital population relative to A + -p 6 s 0 .Q A-eff and Q F-eff mean effective charges on the A and F atoms.OP is the covalent A-F Overlap-Population.* The F atoms of the formal F 3 −1 unit have very similar overlap populations with the alkali cation, while the negative charge is mainly localized on the terminal F atoms; the averaged charge per F is given. Property Table S7 displays the atomic orbital populations of two types of anionic [AF 6 ] − complexes: D 2d -[A(F 3 ) 2 ] − (ground states for A = K, Rb, Cs), and O h -[A(F) 6 ] − (ground states for A = Fr, 119 E).The D 2d species have weakly deformed A + -(n-1)p 6 cations with little s,d,f admixtures, being embraced by two bent (F −0.4 F −0.15 F −0.4 ) anions, bonded mainly ionically.The covalent A-(n-1)p,ns / F-2p overlap populations are very small.It makes hardly any difference whether the D 2d species is the stable isomer or a metastable transition state (as for Fr and 119 E). Breaking up the D 2d -[A(F 3 ) 2 ] − trifluoride anions and forming O h -[A(F) 6 ] − complex species costs energy up to several hundred kJ/mol for the lighter A, with little change of the orbital populations, i.e. the central A ion remains basically in the A + state.For border-case Cs, the 5p core of higher-energy O h -[CsF 6 ] − transition state becomes already perturbed.The real change happens for Fr and 119 E, with a jump by a factor of ca. 
2 concerning the effective charge on F, and the A-(n-1)p / F-2p overlap population.The extraordinary O h ground-state structure of these two species may be formally written as [Fr 5+ F − 6 ] − and [ 119 E 5+ F − 6 ] − .A simple GGA functional like PBE sometimes suffers from the overestimation of charge transfer.The B3LYP and PBE0 hybrid and HSE06 range-adapted hybrid functionals, however, show only little differences.The effective charge definition according to Mulliken is known to become unreliable for extended basis sets.However, the trend in the [ 119 EF k ] − complexes for increasing k is quite smooth and qualitatively similar to other charge definitions according to Bader, Hirshfeld or Voronoi.Overall, the effective charge-changes on the 119 E atom upon formal chemical oxidation by fluorination is largest for each step at the weakly bound outer 8s shell (k from 0 to 2), a little smaller at the stronger bound 7p 3/2 valence shell (k from 2 to 6), and even slightly smaller for enforced oxidation at the core-like 7p 1/2 and 7s 1/2 shells. S7. Enthalpy and Gibbs-Energy Effects of Fluoride Formation The reactions E + k / 2 F 2 → EF k generate different amounts of molar reaction energies ΔE o (at 0 K).In the gas phase, the total volume is reduced, contributing p×ΔV = −½ RT at each step to the enthalpy (stabilization by raising the pressure), but also the thermal energies of the e-and pro-ducts vary.Translation and rotation are converted into vibration, reducing in particular the entropy, which decreases the value of the −T×ΔS contribution to the Gibbs free enthalpy.We estimated the magnitude of these terms to check whether they will have some influence on the reaction equilibria under ambient conditions.A well-known, related case to compare with is E = S, the formation of the sulfur fluorides.S111 We find that ΔH is weakly raised by ca.+0.07 eV (mainly due to rotations and vibrations of the product), but ΔG is raised by ca.+0.5 eV, see Table S9a.The results for the 119 E -+ k/2 F 2 → 119 EF k -reactions are displayed in Table S9b and Fig. S16.Apparently, the superheavy fluorides 119 EF 6 − and 120 EF 6 0 remain stable under normal conditions, while the border cases FrF 6 − and RaF 6 0 become slightly destabilized at STP.The 'oscillatory' variation in Fig. S16, also well-known for the sulfur case, is connected with the vertical 'secondary periodicity' in the periodic table. Comment: The contribution of thermal internal energy and volume changes to the reaction equilibria is small, while the entropy favors unbound difluorine molecules.Compared with the zero-temperature reaction energy E o , this is not decisive for the stability of [ 119 E F 6 ] − , however, it further destabilizes the slightly unstable RaF 6 at STP.To stabilize Ra 6+ at STP, a very specific hexa-dentate ligand would be required. Fig. S1 . Fig. S1.One-electron orbital energies ε of group 1 atoms at the non-relativistic Hartree-Fock approximation.A Slater-type orbital basis of QZ4P quality was employed.The change of background color indicates the energy range where orbitals typically change over from inert core-shell behavior to valence activity under common chemical conditions. Fig. S2a . 
Fig.S2a.Effective nuclear charges 'felt' by ns valence (black squares) and (n-1)p outer-ore (red dots) electrons of group 1 atoms.A single linear fit from Na to 119 E of odd and even periods n is already sufficient to reproduce the computed data of TableS1semi-quantitatively.(H and Li belong to the peculiar light elements; the deviating red dot is for Li-1s, replacing non-existent 'Li-1p'.) Fig. S2b.(n−1)p-core/ns-valence orbital energy gaps Δε (black dots with eye guiding dash line; the Li dot is for 1s/2s) of the alkali metal atoms, at the non-relativistic level of approximation (Hartree-Fock) vs.the period numbers n = 2 to 8. Fig. S3 . Fig. S3.Geometric ground-state structures (ball-and-stick models) of anionic poly-fluoro complex molecules of Eka-Fr [ 119 EF k ] − , k = 1 to 9 (F in green, central alkali metal in blue), from the quasi-relativistic spin-orbit coupled PBE density-functional approximation.The spin-state (closed shell singlet; or open shell doublet due to a singly occupied molecular spinor) is specified by a pre-superscript of the symbol for the geometric symmetry group.The values of the 119 E-F distances are given in parentheses (in pm; averaged in lower symmetry cases).In the lowest line, three complexes are displayed, showing space filling atomic calottes (the outer radii are F: 1.5 Å, Na: 1.5 Å, Rb: 2.0 Å, 119 E: 2.5 Å). Figure S7 . Figure S7.Reaction energies ΔE (in eV) for successive fluorination, 119 E(8s 2 ) − + k / 2 F 2 → [ 119 EF k ] − , for k = 0 -9 using the SOC-ZORA PBE functional.The calculations were performed with varying sizes of the Slater-type orbital basis sets, ranging from DZ to QZ4P.We have evaluated the energies of the key reactions 119 E + k/2 F 2 → [ 119 E F 6 ]  (k = 0 to 9) with the systematic improvement of the basis set from DZ through DZP, TZP and TZ2P to good quality QZ4P, applying the same settings as for Fig.S6(using the PBE functional).The changes are remarkably small.Only the low-quality DZ basis causes notable errors: the stabilities of the covalent 119 E(7p 3/2 )-F(2p) bonds (k = 3-6) are overestimated by nearly 10 %; and the destabilization by the 119 E(7sp 1/2 )-F(2p) interactions (k = 7-9) are significantly underestimated at the low-quality DZ level.The pronounced penta-(and hexa-)valence of super-heavy alkali metal 119 E (and alkaline earth metal 120 E) under common conditions is safely confirmed.The repeated statement that 119 E behaves similar to Rb may only hold for values for BaF 2 : ca. 100 o , 118 o , 120 o .S102 Fig. S9 . Fig. S9.E−F bond distances of various molecular species (in Å; E means A = alkali metal, or Ae = alkaline-earth metal; from spin-orbit coupled ZORA PBE density-functional calculations).Black dots (with black connecting lines): [E + F − ].Blue dots: [A + (F 3 − ) 2 ] − and [Ae 2+ (F 3 − ) 2 ].Lilac dots: [A + (F ⅓− ) 6 ] − .Red squares: [A 5+ (F − ) 6 ] − and [Ae 6+ (F − ) 6 ].Solid (open) markersindicate that the isomer is energetically more (less) stable w.r.t. to F 2 release.For a given isomeric species, the E-F bond length increases steadily from the lighter to the heavier central metal ions (also from Cs to Fr to 119 E, and from Ba to Ra to 120 E).However, the oxidation state of heavy central group-1 atoms may increase from +1 or +2 by 4 units to +5 or +6, respectively; this corresponds to oxidizing off of the p 3/2 4 semi-core shell, and to the rearrangement of E(F 3 ) 2 to E(F) 6 .Then the E-F distances decrease by more than half an Å, due to the E-(n-1)p−F-2p covalent overlap interactions. 
119 E-7p/F-2p orbital mixing causes some overall covalent stabilization, and the respective configuration mixing contributes to the correlation energy, as noted byMiranda et al. already in 2012.S110 Fig. S11 . Fig. S11.Orbital energy level diagram, on the left for K, K-F, F (negligible SO coupling); on the right for F, 119 E-F, 119 E (SO coupled spinors); calculated at the PBE/SO level.The %-ages indicate the dominant atomic orbital contributions (>10%).The K-3p 6 core shell (also the 119 E-7p 1/2 2 component) lies significantly below the F-2p valence shell (strongly upshifted in the molecule due to K→F charge transfer and K-core/F − -2p 6 Pauli repulsion).For the super-heavy alkali metal, 119 E-7p 3/2 and F-2p are near-degenerate and mix, contributing covalent bond energy. Fig. S12 . Fig. S12.(a) Orbital energy level diagram for 119 E 0 , 6 F 0 , and O h -[ 119 EF 6 ] − (in a counter-ionic potential of -5.5 eV), at scalar-relativistic (SR) and spin-orbit (SO) coupled approximations (obtained by the PBE density functional approximation).The shaded molecular band of F-2p type contains (at the SR level) a 1g (with ca.10% 119 E-8s admixture), e g , t 1g , t 2g , and t 2u levels.The two electrons on the highest partially-occupied spatially triply-degenerate t 1u level would cause a geometric Jahn-Teller distortion of the molecular open shell.Spin-orbit coupling stabilizes the molecule (and the free atom, too) yielding a closed-shell molecular ground state with a significant HOMO-LUMO gap of 4.3 eV, without geometric Jahn-Teller symmetry breaking of the O h structure.(b) Corresponding orbital energy level diagram for 119 E 0 , 8 F 0 , and D 4d -[ 119 EF 8 ] − (details as above, including the low 119 E-8s admixture).The closed-shell HOMO-LUMO gap at the SR approximation is reduced by spin-orbit coupling to 1 eV, possibly causing a secondary Jahn-Teller distortion of the anyway less stable octa-fluoride. Fig. S14 . Fig. S14.Effective Atomic Mulliken Charges and Orbital Populations of [ 119 E F k ] 1− complexes.119 E is Eka-Fr, k goes from 0 to 9, i.e. from formal alkali anions E(7p 6 8s 2 ) − to E(7p½ 2 )F 9 − complexes.The increasing number of fluorine ligands attracts from 119 E at first the two E(8s 2 ) electrons (see the red dots, and the eye-guiding red line) yielding ionic E-F bonds, and then the four E(7p 3 / 2 4 ) electrons (blue dots) yielding covalences.The smaller slopes of the red (for k > 2) and blue lines (for k > 6) indicate that the energetically lower E(7s½ 2 7p½ 2 ) electrons are harder to oxidize off.The increasing E-d,f populations (green dots) indicate the increasing deformation of the otherwise spherical E +(k−1) alkali ion.The effective charge on the F ligands (grey triangles) is nearly −1 and decreases a bit in value for increasing fluorination, due to the competition for electrons by the E(7p) and F(2p) orbitals. Fig. S16 . Fig. S16.Variation of Enthalpy H STP (left) and Gibbs Enthalpy G STP (right) with respect to the zero reference E 0 (values in eV; 0 at 0 K; STP at standard temperature and pressure) of reaction 119 E − + k/2 F 2 → [ 119 E F k ] − , for k = 0 to 9. 
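The size of the thermal corrections discussed in section S7 and plotted in Fig. S16 can be checked with simple standard-state arithmetic. The sketch below uses only textbook constants (RT at 298.15 K and the standard entropy of F2 gas), which are not taken from the paper, and it neglects the rotational and vibrational contributions of the products, so it gives only the leading orders of magnitude.

```python
# Rough order-of-magnitude check of the thermal terms of section S7.
R = 8.314462          # gas constant, J/(mol K)
T = 298.15            # K
EV = 96485.3          # J/mol per eV
S_F2 = 202.8          # standard entropy of F2(g), J/(mol K), textbook value

p_dV = -0.5 * R * T / EV            # half a mole of gas disappears per added F atom
minus_T_dS = T * (0.5 * S_F2) / EV  # entropy lost with the consumed 1/2 F2

print(f"p*dV per fluorination step   ~ {p_dV:+.3f} eV")         # about -0.013 eV
print(f"-T*dS per fluorination step  ~ {minus_T_dS:+.3f} eV")   # about +0.31 eV
```

The entropy term of roughly +0.3 eV per consumed 1/2 F2, before the partly compensating product contributions, is of the same order as the ca. +0.5 eV rise of ΔG quoted in section S7, and it rationalizes the slight destabilization of the border cases FrF 6 − and RaF 6 at STP.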
Group 15, bismuth (Bi) and moscovium (Mc): Bismuth is the heaviest and the last element that is practically stable, with a half-life of more than 10^9 cosmic ages. It fits into the general vertical and horizontal trends found for the elements of groups 14 and 15. The relativistic stabilization of Mc(7s½ 2 7p½ 2 ) + may further stabilize the mono-valency of Mc in ionic halides Mc + X − , while tri-valency appears in the hydride McH 3 . S23,S24 The +5 oxidation state for Mc has so far not been established.

Compared with their lighter homologs, lower valences are preferred by 113 Nh and 114 Fl, and 115 Mc, 116 Lv, 117 Ts and 118 Og cannot even reach the higher valence states 5 to 8 of groups 15 to 18, respectively. The heaviest elements differ more and more from their lighter homologs. 118 Og is not at all behaving as a noble gas like krypton or radon, with HOS (highest oxidation state) = 4, in contrast to HOS = 8 for xenon. Further, oganesson has an abnormally small 7p 3/2 -8s½ energy gap. The vertical and horizontal trends of properties and reactivities of the chemical elements over the Periodic Table are diverse: more or less smooth or wavy. Typical deviations from simplistic rules happen at the top of the Periodic Table, without calling it a violation of the Periodic Law.

Table S1. Energy ε and effective nuclear charge Z eff (eq. 1) of atomic one-electron states of group 1 atoms at the non-relativistic Hartree-Fock level.

Table S6. EF 6 1−,0 isomers (E = Na, K, Rb, Cs, Fr, 119 E and Ra, 120 E). Optimized structures and relative energies at the PBE/ZORA level, with SOC for species with heavy E from period 6 onward.
2023-10-21T06:18:17.085Z
2023-10-19T00:00:00.000
{ "year": 2023, "sha1": "de89999d28314a4ef340f45dc0924b938ede0ced", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1073/pnas.2303989120", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "2836b46bdc24d47452d342a81ef326fbebbc8a39", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
59320030
pes2o/s2orc
v3-fos-license
Antiproliferative activity of chloroformic fractions from leaves and inflorescences of Ageratina gracilis doi: 10.9755/ejfa.2016-09-1164 INTRODUCTION The use of natural products to suppress or prevent cancer progression has been a strategy used around the world particularly in developed countries with a wide biodiversity (Newman et al., 2016).Colombia is one of the countries with the highest diversity of plant species that can be used in treating diseases (Lizcano et al., 2014).Ageratina gracilis (Kunth) R.M. King & H. Rob, is a specie that belongs to the family Asteraceae, it was formerly part of the Eupatorium genus but then classified within the genus Ageratina.It has different synonyms including Eupatorium caducisetum DC, Eupatorium epilobioides Kunth and Eupatorium gracile Kunth (Roskov et al., 2016).A. gracilis has been localized between 2000 and 3000 meters above sea level on Colombian moors (Garcia 1975), but factors such as, climate change, urbanization, among others have endangered the specie.A. gracilis is a herbaceous plant, about 20 cm high; erect stems and rounded at the bottom branches; green and alternate leaves on mature plants and whorls on the younger ones; by last leaflets have a length around 10 and 15 mm.Corymb terminal inflorescences, white headed and heterogamous (Garcia., 1975).Previously, some flavonoids were isolated from flowers of A. gracile such as 3,5,4'-trihidroxi-7,8-dimetoxiflavone, 3,5,6,3'4'-pentahidroxi-6 metoxiflavone, 3,5,7,4'-tetrahidroxi,-8-metoxiflavone, 3,5,6,7,3',4'-hexahidroxiflavone and 3,5,7,8,4'-pentahidroxiflavona (Torrenegra et al., 1984) compounds that showed antioxidant activity. Others species of Ageratina genus have been studied for anticancer activity, these are A. adenophora that showed inhibition of the cell proliferation against HCT-8 (colorectal adenocarcinoma), Bel-7402 (hepatocellular carcinoma), and A2780 (ovarian carcinoma) (He et al., 2008) A. pichinchensis presented in vitro cytotoxicity against some cancer cell lines as KB (nasopharyngeal To obtain a scientific basis and justification of plant domestication in the use of Ageratina gracilis, we did an in vitro study of the anticancer potential of extracts and fractions from its leaves and inflorescences.Firstly, cytotoxicity was evaluated against five human tumorigenic cell lines by MTT assay.Subsequently, the chloroformic fractions, considered the most cytotoxic were tested for genotoxicity by comet assay, morphological effects were analyzed by fluorescent microscopy, cell cycle arrest by flow cytometry and early apoptosis induction through fluorescein-5-isothiocyanate (FITC) labeled Annexin-V assay.Non-polar extracts with IC 50 values of <53µg/ml showed a high cytotoxicity.The highest cytotoxicity was achieved by chloroformic fraction from petroleum ether extract of leaves and inflorescences and chloroformic fraction from ethanolic extract of leaves, displaying a significant inhibition of cell viability particularly on A549 cells with an IC 50 value of 25.9 µg/mL.Chloroformic fractions caused a high percent of DNA damage above 60 percent on A549 and MDAMB-231.The fractions also induced G1/S phase arrest of the cell cycle in A549 cells, furthermore it was confirmed the apoptotic activity chloroformic fraction from petroleum ether extract of inflorescences and chloroformic fraction from ethanolic extract of leaves on those cells by Annexin-V assay.These preliminary results indicate that A. 
gracilis has an antiproliferative activity against cancer cells, being a starting point for forthcoming studies about the antineoplastic activity and its domestication conditions.carcinoma), UISO (squamous cell carcinoma of the cervix), OVCAR (ovarian carcinoma), and HCT-15 (colorectal carcinoma) (Romero et al., 2011) and recently it has been reported HeLa cells apoptosis induction by 9-oxo-10, 11-dehydroageraphorone, compound isolated from E. adenophorum (A.adenophora) (Liao et al., 2015). The aim of the present study was to examine the cytotoxic activity, morphological changes, genotoxicity, cell cycle arrest and apoptosis induction on tumorigenic human cancer cell lines by A. gracilis extracts or fractions to establish if this specie has a possible anticancer potential and if its domestication can be justified. Plant material collection and processing The plant was collected in Guasca Cundinamarca, Colombia, between 2800 and 3300 m.a.s.l, later identified at the Colombian National Herbarium as Ageratina gracilis (Kunth) R.M. King & H. Rob COL 572755. The dried ground plant material (leaves and inflorescences) were separately subjected to Soxhlet extraction first with petroleum ether and then with ethanol.The fractionation was done with these solvents in the following order: Petroleum ether, chloroform, ethyl acetate and ethanol thereby obtaining the different fractions.The cytotoxicity of fractions was performed and the exclusion of fractions was made according results, then, the study was focused in chloroformic fractions. Inhibition of cell viability assay Antiproliferative activity was determined using the 3-(4, 5-methyl-thiazol-2-yl)-2, 5-diphenyl-tetrazoliumbromide (MTT) method (Kumar et al., 2014), with some modifications.Seven thousand cells were seeded per well in a 96-well plate and grown in a 5 % CO 2 atmosphere at 37 °C for 24h before treatment.Cells dissolved in dimethyl sulfoxide (DMSO) (Sigma-Aldrich) were treated with extracts and fractions in concentrations of 2, 5, 10, 25, 50, 100 and 200 µg/mL.The maximum concentration of DMSO was 0.5 % per treatment.After 24h of incubation, MTT stock solution (Sigma-Aldrich) was dissolved in the corresponding medium without phenol red, and 100 µl of a 0.5 mg/mL dilution was added per well and then incubated for 4 h.Formazan products were solubilized with 100 µl DMSO (Sigma-Aldrich).The optical density (OD) was determined with a wavelength of 570 nm by a 96 well plate reader (BioRad 680 Model).Vincristine sulphate (VCR) (Cayman) was used as positive control to determinate IC 50 in MTT assay under the same conditions described above.The cell viability was expressed as percentage of non-inhibited cells at different concentrations by the extracts or fractions and the IC 50 value was defined as the concentration where the extracts or fractions, caused a decrease of 50 % of the cell viability.A non-linear regression was used to determine IC 50 value. Morphological changes A morphological study was performed in order to examine the cellular damages occurred after exposure to the cytotoxic chloroformic fraction from petroleum ether extract of leaves and inflorescences and chloroformic fraction from ethanolic extract of leaves, of A. 
gracilis at half of the IC 50 values determined before by MTT cell assay.One hundred thousand cells per well were seeded into a 24-well plate and grown in a 5 % CO 2 atmosphere at 37 °C for 24 h before treatment.Untreated and treated cells with active chloroformic fractions and VCR were fixed in absolute cold methanol for 10 min, after 24 h of incubation, then in acetone for 20s at 20 ˚C.To evaluate integrity of the microtubules in the cells, a mouse anti-atubulin monoclonal antibody DM1A (Sigma-Aldrich) was used (Mendez et al., 2014), while mitochondria analysis was performed by using of rabbit anti-COX IV monoclonal antibody (Cell signaling).Cells were incubated with 0.2 µg/mL of the mouse anti-α-tubulin or with 1.0 µg/mL of the rabbit anti-COX IV over night at 4 ˚C after blocking with 2 % (w/v) BSA/PBS.Cells were incubated with 2 µg/mL of Alexa Fluor 488 goat anti-mouse secondary antibody (Molecular Probes) or Texas Red anti-rabbit secondary antibody (Sigma Aldrich), both diluted in BSA/PBS at 2 % (w/v).Staining of DNA was done with 1.0 µg/mL of DAPI (Invitrogen).Slides mounted in antifade solution (Vectashield; Vector Laboratories) were monitored with epifluorescence Motic AE31 microscopy, captured with MoticCamPro 282A and analyzed with the Motic Image plus 2.0 software. Genotoxicity assay (comet assay, single cell gel electrophoresis) A single cell gel electrophoresis (SCGE) or comet assay determined the chloroformic fractions genotoxic potential.DNA damage was quantified by measuring the displacement between the genetic material of the nucleus (comet head) and resulting tail (tail DNA).At least 50 cells were analyzed individually.As positive control for genotoxicity VCR was used (Recio et al., 2010).Tests were performed at 24 h after exposure to chloroformic fraction from petroleum ether extract of leaves and inflorescences and chloroformic fraction from ethanolic extract of leaves fractions and VCR at IC 50 established before for each fraction.The procedure was performed according the instructions of the OxiSelect™ Comet Assay Kit, in TBE electrophoresis solution.Vista green DNA staining solution permitted to visualize the cells on epifluorescence microscopy AE31 MoticCamPro 282A.Images were captured and analyzed with Comet Score software expressing results as a DNA migration or tail DNA percent (1). Cell cycle distribution Cell cycle distribution was measured on untreated and treated A549 cells with chloroformic fractions of A. gracilis, DMSO 0.5% is the negative control and VCR is a positive control for arresting in G2/M phase.It was used a concentration corresponding to the IC 50 value determined before by MTT assay.Cells were incubated during 24, 48 and 72 h at 37˚C in a 5% CO 2 atmosphere.Cells were fixed dropwise with cold ethanol at 70% while gently vortexing, and incubated in the dark for 30 min with propidium iodide (PI) staining solution: 3.8 mM sodium citrate, 50 µg/mL PI in PBS and RNase A stock solution: 10 µg/ml RNase A. DNA content measures were performed in a BD-FACS AriaTM cytometer.Cell distribution in the Sub-G1 (possible apoptosis), G1, S and G2/M phases of the cell cycle, was analyzed by using of FlowJo_V10 software. Annexin V-FITC detection of apoptosis A549 cells were plated at 1 x 105 cells per well into a 24-well plate.The next day, cells were treated with chloroformic fractions of A. 
gracilis, DMSO 0.5% (negative control) and VCR as positive control for apoptosis induction, and 100 mM of hydrogen peroxide (H 2 O 2 ) as positive control for necrosis (Teramoto et al., 1999).It was used a concentration that corresponded to the IC 50 value determined before by MTT assay.Cells were incubated for 6h, and then stained with annexin V-FITC/PI (Cayman's Annexin V FITC/PI Assay Kit).Cells were examined by fluorescent microscopy using the filters for FITC (excitation/emission= 485/535 nm) and for Texas Red (excitation/emission= 590/610 nm) in a fluorescent microscopy Motic AE31 and the images were captured in a MoticCamPro 282A. Statistical analysis The statistical analysis was performed on IBM SPSS 20 software.Assumptions of the parametric analysis were determined looking for a normal Gauss distribution by Shapiro-Wilk and Kolmogorov-Smirnoff test, with homogeneity of variances (p> 0.05).The values of percentage of DNA damage were submitted to analysis of variance (ANOVA), with post-hoc HSD-Tukey, Scheffé versus DNA damage determinated for VCR by Dunnett's test.All the experiments were performed in triplicate.Significant differences between cell responses were indicated as *p < 0.05 or **p<0.01 RESULTS AND DISCUSSION Ageratina gracilis is considered a natural source for treatment of different diseases (García, 1975); however, there are not scientific reports related with its medicinal properties.The present study was aimed to evaluate the antiproliferative activity, and effects of extracts and fractions obtained from inflorescences and leaves of A. gracilis on human cancer cells, having as basis the knowing that this plant contains different compounds with possible biological activities as was previously reported (Torrenegra et al., 1984). MTT viability assay was used to evaluate cytotoxic effect.The results showed that non-polar extracts from inflorescences and leaves had a higher effect on cell viability compared with the polar extracts on all cell lines with IC 50 values between 11.2 and 52.6 µg/mL (Table 1).The A549 lung cancer cells initially showed a high resistance to the extracts in general, with IC 50 values between 34.8 and 117 µg/ml; however the chloroformic fraction from petroleum ether extract of leaves and inflorescences, and chloroformic fraction derived from ethanolic extract from leaves had an important cytotoxicity on this cell line, with IC 50 values of 17.5, 23.7 and 25.9 µg/ml respectively (Table 2).The other fractions showed a low cytotoxicity on all cell lines, reason why they were excluded of this study. Based on the results of cytotoxicity, the study was directed towards the analysis of genotoxicity and apoptotic activity of the chloroformic fractions from petroleum ether extract of leaves and inflorescences and chloroformic fraction from ethanolic extract of leaves.The genotoxic activity, was evaluated by comet assay and the DNA damage was expressed as percentage of genetic material displaced from the nucleus or "comet head" resulting in a "comet tail" induced by treatments (Fig. 
1a).According with the results a high percentage of DNA damage was exhibited by the cells exposed to the three chloroformic fractions indicating that these fractions have active compounds capable to inhibit the cell proliferation.The higher genotoxicity was achieved by the chloroformic fraction from petroleum ether extract of leaves on A549 and HT29 cells, with a percentage of DNA damage of 62 % and 57 % respectively, and significantly different to the effect of the positive control VCR (p<0.05), with a percentage of DNA damage of 33 % and 35.5 % on A549 cells and HT29 cells respectively (Fig. 1b).The high percentage of DNA damage in the negative control cells exposed to 1 % DMSO was 1.5 %. Morphological analysis performed on cells exposed to those fractions, by immunofluorescence microscopy was realized in order to confirm inhibitions of cell growth and estimate a possible apoptosis induction; changes in the microtubule integrity, nucleus, mitochondria, cell shape and size were detected.As it is known currently, the mode of death and morphological changes are dependent on the cell type, and the stimuli applied with other cell types (Ziegler et al., 2004).In the inhibition of microtubule dynamics, a persistent alteration of biological processes is induced which eventually leads to apoptosis (Mollinedo et al., 2003).In this study it was found a loss of the integrity of microtubules forming the cytoskeleton on all cells exposed to chloroformic fractions derived from petroleum ether extracts, and the most affected were SiHa and A549 cell lines in response to the chloroformic fraction from petroleum ether extract of inflorescences, chloroformic fraction from petroleum ether extract of leaves affected and A549 cells, the less effect on microtubule organization was observed in the cells exposed to chloroformic fraction from ethanolic extract of leaves (Fig. 2).Simultaneously with the microtubule destabilization, different nuclear phenotypes were induced by the treatments, finding in some cells as A549 and MDA-MB231, an initial increase in size thereof relative to the negative control, even with the positive control VCR, after 24 hours (Figs. 
2 and 3).Previously, it was reported that VCR binds to DNA and chromatina on cancer cells (Mohammadgholi et al., 2013), according to this, the increase in the size of the nucleus could be related with the nuclear envelope disruption which allows vincristine, and perhaps components of chloroformic fractions, enter the nucleus.Another hypothesis is the induction of mitotic catastrophe, because it is known that different classes of cytotoxic agents can induce an abnormal mitosis that results in cell death (Mansilla et al., 2006), given by factors such as abnormal nuclei, nucleus enlarging, multipolar mitoses, or multiple nuclei, characteristic of mitotic catastrophe (Maskey et al., 2013), as it was observed in this study.On the other hand, the mitochondrial material was affected by chloroformic fractions, showing the shuttling of complex IV subunit (COX IV) from cytoplasm to the nucleus, as was observed in A549 and PC3 cells treated with chloroformic fraction from petroleum ether extract of leaves and in MDA-MB-231 cells exposed to chloroformic fraction from petroleum ether extract of inflorescences.These results could support an evidence of programmed cell death induction in A549 cells, according previous reports where mitochondrial proteins were translocated from mitochondria to the nucleus after apoptotic stimuli (Moreira et al., 2014).In addition, a decrease in the number of mitochondria was observed in A549 and MDA-MB-231 cells treated with chloroformic fraction from petroleum ether extract of leaves; A549 treated with chloroformic fraction from petroleum ether extract of inflorescences and MDA-MB-231 treated with chloroformic fraction from ethanolic extract of leaves (Fig. 3).All cell lines analyzed showed alterations by exposure to chloroformic fractions; however, according with the results, the most affected cells by treatments were A549.The size and complexity changes of that cell line was analyzed performing a basic flow cytometric analysis of the parameters forward-scatter area (FSC-A) and sidescatter area (SSC-A) data after 6 h of 50.000 events of treated or untreated cells.Fig. 4 shows the dot plot of subpopulations of A549 cells stained with PI.Synchronized and negative control cells were presented as normal in size and complexity; nevertheless, upon treatment with VCR, cells presented an increasing of FSC-A and SSC-A with the 87 % of cells located in the Q2 subpopulation, between chloroformic fractions the chloroformic fraction from ethanolic extract of leaves caused the most significant alteration on light scatter properties: 77 % of cells in Q2 (Fig. 4).On the other hand, a normal distribution in the cell cycle phases was observed with the negative controls of A549 cells.The synchronized cells showed a high population in G1 phase > 80% which was decreasing with the progress of incubation time with DMSO vehicle, but without affection in the normal development.The number of cells in Sub-G1 not exceed 2% during 72 hours of incubation.Vincristine sulfate, as expected, produced an arrest in G2/M phase of the cell cycle of A549 cells, avoiding cell proliferation and hence causing cell death (Poruchynsky et al., 2015), as was evidenced in a time dependent manner of drug treatment.Instead, the chloroformic fraction from petroleum ether extract of inflorescences, chloroformic fraction from petroleum ether extract of leaves and chloroformic fraction from ethanolic extract of leaves, caused cell growth inhibition of A549 cells by blocking the G1 phase to S phase in the cell cycle (Fig. 
5).However, cells treated with chloroformic fraction from petroleum ether extract of leaves decay aggressively at 48h indicating that cell cycle arrest induced by this fraction, occurred during a short time before imminent cell death.A similar behavior was evidenced on cells exposed to chloroformic fraction from petroleum ether extract of inflorescences but at 72 h, indicating that these fractions could be a highly active compound that causes rapid decrease in cell viability. The results of different test showed that treatments given with chloroformic fractions from petroleum ether extract of inflorescences, chloroformic fraction from petroleum ether extract of leaves, and chloroformic fraction from ethanolic extract of leaves are impeding cells elapse to phases of DNA synthesis and cell division inducing cell death; but it is unknown if this cell death is due to the apoptosis or necrosis process on A549 lung cancer cells.In this respect, apoptosis induction achieved by chloroformic As it is known, PI has the ability to enter a cell when it loses the permeability of the membrane, indicating late apoptosis or necrosis, therefore, PI does not stain live or early apoptotic cells due to the presence of an intact plasma membrane (Rieger et al., 2011).In this respect, the results indicated that the chloroformic fractions from petroleum ether extract of inflorescences, and chloroformic fraction from ethanolic extract of leaves induced early pro-apoptotic response on A549 cancer cell lines (Fig. 6).By the contrary, cells exposed to chloroformic fractions from petroleum ether extract of leaves showed PI staining, indicating late apoptosis at only 6 h of treatment, confirming the rapid induction of cell death seen before on cell cycle analysis after 48 h of treatment with this fraction.Positive control cells (VCR treated) showed early apoptosis, only a few cells showed late apoptosis.Cells treated with peroxide hydrogen, were rapidly labeled with both, Annexin V FITC and PI showing a typical cellular death by late apoptosis or necrosis (Brauchle et al., 2014). CONCLUSIONS In conclusion, this study confirms that Ageratina gracilis has anticancer properties on A549, MDA-MB231, HT29, SiHa and PC3 cells.The results demonstrated that chloroformic fractions from ethanolic extract of leaves, and of petroleum ether extract of inflorescences, have a high antiproliferative and genotoxic effect on A549 cell line, additionally, inducing G1 cell cycle arrest, and apoptosis, different to chloroformic fraction from petroleum ether extract of leaves, that caused apparently non-programming cell death.The data obtained is a starting point for forthcoming studies focused in the elucidation of active components present in these fractions. Fig 1 .Fig 2 .Fig 3 . Fig 1. Genotoxic activity of chloroformic fractions from petroleum ether and ethanolic extracts of inflorescences and leaves of A. gracilis on SiHa, HT29, A549, MDA MB-231 and prostate PC3 cancer cell lines after 24 hours of incubation, (a) Comet assay fluorescent images.Nuclei stained with vista green, (b) Percentage of DNA damage, according analysis of parameters by comet score software.Significant values according to ANOVA test, *p<0.05,**p<0.01. Fig 4 . Fig 4. Front and side scatters representing changes in size and complexity of the A549 cells after 6 hours of treatment with chloroformic fractions from petroleum ether and ethanolic extracts of inflorescences and leaves of A. 
gracilis. Cells were stained with PI and parameters were analyzed using the FlowJo_V10 software. Significant differences in values of cells in Q2 according to ANOVA test, **p < 0.01.

Fig 5. A549 cell distribution in the phases of the cell cycle after treatments with DMSO 1% (negative control), VCR (positive control for arrest in G2/M phase), and chloroformic fractions of A. gracilis. Cells were incubated for 24, 48 and 72 h and stained with PI. Cell distribution in the Sub-G1 (possible apoptosis), G1, S and G2/M phases of the cell cycle was analyzed using the FlowJo_V10 software.

Table 1: IC50 values of the polar and non-polar extracts and corresponding fractions from inflorescences and leaves of A. gracilis on cervix (SiHa), colon (HT29), lung (A549), breast (MDA MB-231) and prostate (PC3) cancer cell lines after 48 hours of incubation. Values expressed in µg/mL ± SE. Vincristine sulphate was used as positive control for cytotoxicity.
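The Methods above state only that IC50 values were obtained by non-linear regression of the MTT viability data; a common choice for such dose-response data is a four-parameter logistic fit, sketched below. The concentrations match the treatment range used in the assay, but the viability numbers and the choice of model are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve (viability in %)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Illustrative data only: concentrations (ug/mL) and % viability from an MTT assay
conc = np.array([2, 5, 10, 25, 50, 100, 200], dtype=float)
viability = np.array([98, 92, 80, 52, 30, 15, 8], dtype=float)

# Initial guesses: bottom, top, IC50, Hill slope
p0 = [viability.min(), viability.max(), 25.0, 1.0]
params, _ = curve_fit(four_param_logistic, conc, viability, p0=p0, maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50 ~ {ic50:.1f} ug/mL")
```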
2018-12-21T00:54:16.184Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "46f833eba9b7c2b70c2434a9a037341d881e8ec0", "oa_license": "CCBY", "oa_url": "http://www.ejfa.me/index.php/journal/article/download/491/353", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "46f833eba9b7c2b70c2434a9a037341d881e8ec0", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Chemistry" ] }
229180784
pes2o/s2orc
v3-fos-license
When does the tail wag the dog? Curvature and market making Liquidity and trading activity on constant function market makers (CFMMs) such as Uniswap, Curve, and Balancer has grown significantly in the second half of 2020. Much of the growth of these protocols has been driven by incentivized pools or 'yield farming', which reward participants in crypto assets for providing liquidity to CFMMs. As a result, CFMMs and associated protocols, which were historically very small markets, now constitute the most liquid trading venues for a large number of crypto assets. But what does it mean for a CFMM to be the most liquid market? In this paper, we propose a basic definition of price sensitivity and liquidity. We show that this definition is tightly related to the curvature of a CFMM's trading function and can be used to explain a number of heuristic results. For example, we show that low-curvature markets are good for coins whose market value is approximately fixed and that high-curvature markets are better for liquidity providers when traders have an informational edge. Additionally, the results can also be used to model interacting markets and explain the rise of incentivized liquidity provision, also known as 'yield farming.' Introduction With the advent of Bitcoin and, more generally, the blockchain, there has been a strong desire for automated censorship-resistant decentralized exchanges (DEXs). As blockchains provide a censorship-resistant means for executing programs in replicated state machines, many early DEX designs focused on emulating traditional market structure. These early attempts implemented data structures from conventional markets, such as limit order books, in smart contracts. However, due to both computational and latency constraints, blockchains often end up being suboptimal for order books. Additional notions for Constant function market makers. One solution to this problem is the family of constant function market makers (CFMM) [4], starting with Uniswap [2,5] which were invented as blockchain-native mechanisms for decentralized trading. CFMMs require constant space and time interactions, unlike limit order books which have O(n) space and, in many practical applications, O(n 2 ) time complexity to process n trades. (There are some implementations achieving asymptotically better results in space, such as [34], but these designs are not, to our knowledge, used in practice.) This constant space and time requirement is ideal for blockchain environments where storage is expensive, as any data stored needs to be replicated and available at all consensus nodes, while compute can be costly for end users due to transaction fees. CFMM agents and interactions. CFMMs are a special type of market maker that mediates the interactions between two principal agents: liquidity providers (LPs) and traders. LPs provide capital to the CFMM by locking assets in a smart contract that implements the CFMM. LPs are then incentivized to not withdraw capital in order to earn trading fees from traders who trade against liquidity providers' locked capital. This is implemented using the following mechanism: when a liquidity provider locks their assets into a CFMM smart contract, the smart contract creates LP shares and sends them back to the LP agent. These LP shares are tokens that are essentially vouchers for the cash flows of the CFMM-they can be redeemed at a future time for the LP's share of the CFMM's assets and a pro-rata share of fees. 
For instance, if an LP provides 10% of the liquidity to a pool, then upon redemption they receive 10% of the assets held in the pool, which includes the fees accrued by the pool. Relative liquidity of CFMMs. Prior work on CFMMs [5,4] has analyzed necessary and sufficient conditions for CFMMs to track an external reference market with infinite liquidity. In this setting, CFMMs are modeled as secondary markets with finite liquidity whose price is adjusted by arbitrageurs to match that of the reference market. This model allows one to answer the question of whether CFMMs can serve as price oracles; i.e., difficultto-manipulate on-chain price feeds that other smart contracts can use. However, the recent advent of incentivized CFMMs (referred to as "yield farming") has led to a number of hundred million dollar markets whose reference market is a CFMM with finite liquidity. In fact, Uniswap's volume of $440m on August 30, 2020 surpassed that of Coinbase Pro ($380m), the largest US cryptocurrency market, making a number of markets significantly more liquid on CFMMs than on centralized order books [15,29]. (See figure 1.) The natural next question to explore is: what happens when a CFMM becomes the most liquid market; i.e., when the CFMM is more liquid than the external market? In order to answer this question, we analyze how a CFMM market interacts with external markets that have finite liquidity. Our framework ( §1 and §3.1) is general enough to include external markets that are limit order books or other CFMMs. For instance, the secondary market could be Uniswap and the external market could be a Balancer pool with the same assets. With this framework, we find that an analogue of the Gaussian curvature of a CFMM's trading function dictates how much prices differ between a CFMM market and a less-liquid secondary market, after no-arbitrage is enforced. Given that CFMM trading functions need only be convex and not smooth, we construct an analogue of standard Gaussian curvature that works in the convex setting. Price stability. In practice, highly liquid CFMMs appear to have local price stability between pairs of reserve assets. Issuers of on-chain assets have used this property to incentivize additional liquidity in low-curvature CFMMs in order to reduce volatility in the targeted assets. A well-known example involves sUSD, a dollar-pegged 'stablecoin' issued through the Synthetix protocol [45]. While sUSD is intended to track the price of $1, in practice it was quite volatile around this peg, making it less useful to users seeking a stable coin price. In response, in March 2020, Synthetix incentivized the creation of a low-curvature CFMM through Curve [22] that provided trading pairs between sUSD and other, more liquid, stablecoins. Shortly after the deployment of the CFMM, sUSD appeared significantly less volatile. Our framework provides a plausible explanation for this apparent price-stabilizing effect of low-curvature CFMMs. Using the definitions provided in §1 and a basic inequality in §3, we relate the price stability to the liquidity differences between the CFMM and the external market. While this model provides a plausible explanation for the stability in sUSD price observed between March and June 2020 (green shaded region in figure 2), it does not explain the subsequent volatility experienced between June and September 2020 (purple region). This latter period corresponds to the rise of 'yield farming' in the summer of 2020, wherein a number of stablecoins traded above their dollar pegs. 
We extend our model in §3.3 to incorporate the interaction between yield farming and curvature in CFMMs. Liquidity provider returns. A central question, answered in [4], is to determine the expected value or payoff of holding LP shares under the assumption of no-arbitrage. This payoff can be used to compare the returns that a CFMM liquidity provider earns to those of a market maker on a traditional exchange (e.g., a limit order book). Unlike limit order books, CFMMs only offer market orders and have deterministic slippage costs. The deterministic nature of slippage in CFMMs often leads to a variety of front-running attacks and deadweight loss that miners and validators can capture at the expense of users [19]. However, this deterministic slippage also makes it easier for LPs to compute their expected payoff or expected loss. Using some results from mathematical finance [13], it can be shown that Uniswap, the first CFMM to have over $100 million in digital assets, has LP shares that should be thought of as closer to a perpetual options underwriter than a spot trading venue [18]. Combined, these properties suggest that CFMM LPs own a combination of derivative securities on the underlying tokens. With CFMMs that can dynamically adjust, such as Balancer [43], one can arbitrarily tune the payoff function of these derivative securities by adjusting the shape of the CFMM pricing curve [25]. Given that these characteristics deviate from those of conventional spot financial exchanges, it is a natural to ask what happens when a CFMM is the dominant market for a set of assets. Practical observations. The intuition for why curvature affects actual market performance comes from empirical observations that CFMMs with lower curvature can increase LP profits for mean-reverting assets. Curve [22], which was designed for trading highly correlated assets by offering lower curvature, was able to attract $1 billion in liquidity and reach $350 million in daily trading volume because LPs were able to extract more fees than in Uniswap. On the other hand, Balancer provides an adjustable CFMM curve, whose curvature controls the amount of loss engendered by LPs when large price moves occur. Uninformed trading. Using some basic results and definitions, we show in §2.1 that assets in which trades are mostly uninformed provide LPs with positive payoff whenever the curvature is low. These results also suggest that the curvature of a CFMM trading function controls how much rebalancing a given CFMM will perform. This provides a clear reason for why certain curves appear to be better in practice for mean-reverting assets (e.g., Curve for stablecoin-stablecoin trades). Informed trading. Next, we show that, in order for an LP to be profitable, the curvature of a CFMM should be large to account for the amount of information that an informed trader brings to the market. This result is an analogue of classical market microstructure results involving multiplayer games between informed traders and market makers [36,28,27]. Our formulation extends the informed trader framework for Uniswap of [6] to general CFMMs through a two-player game where the informed trader makes a maximum bet [12] on the next market price update given an informational advantage. Using the curvature framework of §1, we illustrate a condition (similar to Glosten's classical bound [27]) that connects the informational edge of the informed trader and the curvature and fees of a CFMM to the payoffs of the informed trader and the LP. 
We extend this single period model to a multiperiod model and make conjectures about the optimal information flow in a CFMM in appendix G. Yield farming. We also show lower bounds to so-called 'yield farming' payoffs that are sufficient to compensate LPs, based on the curvature difference between two markets. Yield farming can be seen as analogous to market-maker subsidies in conventional markets [11]. In yield farming, an LP first supplies reserves to a CFMM containing some token T and a numéraire such as Ethereum (ETH). The LP then locks the corresponding LP shares they receive from the CFMM and receives newly-minted tokens of T over time in return. By locking LP shares in the smart contract, users that provably provide liquidity for the T-ETH trading pair are subsidized. In practice, this incentive bootstraps liquidity for token T by incentivizing users to be LPs [20]. We show that these incentives can be bounded from below by the curvature of a CFMM. Portfolio Greeks. Finally, we apply our results regarding curvature to analyze financial properties of CFMM LP shares. First, we extend the frameworks of [4,18] to compute the first and second order Greeks (e.g., ∆, Γ) for CFMM LP share payoffs in the two coin case, and then use the basic liquidity framework provided in §1 to give bounds on the portfolio Greeks. While we note that these results, due to the assumptions, may be of limited practical importance, they can be used to interpret the curvature bounds in several different ways. We leave a suitable and practical version of the result as an open conjecture. Summary. These results show that curvature is a crucial design parameter to tune when designing CFMMs. A number of CFMM designs have been proposed for prediction markets [39], derivatives trading [1, 35], and self-balancing ETFs [43]. In each of these appli-cations, a CFMM LP share represents a complex payoff function that changes dramatically based on the expected trading behavior of the assets held within the CFMM. Our results show that the returns and payoffs realized by holders of CFMM LP shares are intrinsically tied to the curvature of the trading function. Moreover, they explain a number of empirical outcomes that have happened throughout the large number of different CFMMs available on Ethereum. CFMM designers need to be cognizant of trade-offs that are made by adjusting curvature, especially as payoff functions become increasingly complex. These results also provide guidance on how to parameterize yield farming incentives to achieve certain liquidity targets. As a number of yield farming assets have failed due to over-incentivization of liquidity [44,33], it is increasingly important to understand how to efficiently incentivize onchain liquidity. These results provide ways to sensibly optimize incentives to meet liquidity goals. Two asset market model In this section, we define basic terms and models used throughout the remainder of the paper. In particular, we will define what it means for a market to be 'stable' or 'liquid.' We note that, while some markets in practice can simultaneously trade n coins for n coins, we will focus on the case where the market only trades two coins, and this is the model we will use in the majority of the paper. (We will sometimes present the n coin generalizations, as in, e.g., appendices E and I, when they are simple, but this is the exception rather than the rule.) 
In this case, we will call one asset the traded coin, and this asset is distinct from the numéraire, which is the base currency used to measure prices. Unless otherwise specified, all trades (which we will often simply call ∆) are positive when they buy ∆ of the traded asset and negative when they sell it.

Price impact function. We will define the price impact function g : R → R_++ of a market to be the function that connects the market's marginal price before the trade, which we will call m_0, to the price after the trade. More specifically, we have that g(0) = m_0, while g(−∆) specifies the CFMM's marginal price immediately after the trade that sells ∆ of the traded coin to the market is performed. We assume two basic facts about g: one, that g is continuous, and, two, that g is nondecreasing. In other words, g is a continuous function that expresses how the market's price changes after a given trade, with the assumption that trades of size 0 (i.e., null trades) do not change the market price and that larger trades lead to higher marginal prices. Note that these assumptions are common in the order-book literature (see, e.g., [7]) and true for all CFMMs (see, e.g., [4]). Additionally, this assumption is equivalent to the convexity of the quantity function q(∆) = ∫_0^∆ g(t) dt, which is also a common assumption in the classical economics literature (see [26]).

No-arbitrage. A common way to model interactions between different markets is the assumption of no-arbitrage. One way of stating this assumption is through the existence of an agent, called an arbitrageur. This agent is allowed to borrow any amount of coin ∆, trade it between the available markets (to receive some amount of coin, say, ∆_a), and then pay back the borrowed amount ∆, to receive ∆_a − ∆ profit. If there exists a trade which guarantees that ∆_a > ∆, we then say that there is an arbitrage opportunity, which we assume the arbitrageur will execute to receive strictly positive profit. In our presentation, we will assume that there is an (infinitely liquid) reference market with fixed price m_a. (We provide a generalization to reference markets which do not have infinite liquidity in §3.) An arbitrageur would then attempt to maximize its profits by trading some amount of coin ∆ between both markets, which would give a total payoff of ∫_0^∆ (m_a − g(−t)) dt. A necessary condition for this quantity to be maximized is that the marginal prices of both markets after the no-arbitrage trade, which we will call ∆′, must be equal; i.e., ∆′ must satisfy

g(−∆′) = m_a,

which follows from the first-order optimality conditions applied to the arbitrageur's payoff. Without loss of generality, we will assume that m_a ≤ m_0 = g(0). Note that this is always possible in a two-coin economy by changing the choice of the coins to be traded; i.e., by swapping the places of the traded coin and the numéraire, the resulting market prices are 1/m_a ≤ 1/m_0, which satisfy the above inequality. Because of this assumption and the fact that g is a nondecreasing function, we will have that ∆′ ≥ 0. (We discuss this further in the case where the market is not infinitely liquid in §3.1.)

Price stability. We will say that the price impact function g is µ-stable (with µ ≥ 0) if it satisfies

g(0) − g(−∆) ≤ µ∆.     (2)

In other words, we say that the price impact function g for some market is µ-stable whenever a (nonnegative) trade of size ∆ does not change the market's price by more than µ∆.
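As a quick numerical illustration of the definitions so far, the sketch below uses a made-up continuous, nondecreasing impact function (not any particular CFMM's), finds the no-arbitrage trade ∆′ satisfying g(−∆′) = m_a by root finding, and checks the µ-stability inequality (2) on a grid; all constants are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative price impact function only (not a specific CFMM): continuous and
# nondecreasing in its argument, with g(0) = m0.  g(-d) is the marginal price
# after selling d of the traded coin to the market.
m0, scale = 1.05, 50.0
def g(x):
    return m0 * np.exp(x / scale)

m_a = 1.00  # fixed price of the infinitely liquid reference market, m_a <= m0

# No-arbitrage trade size: the first-order condition g(-d) = m_a
delta_star = brentq(lambda d: g(-d) - m_a, 0.0, 10.0 * scale)
print(f"no-arbitrage trade size ~ {delta_star:.3f}")

# mu-stability check, g(0) - g(-d) <= mu * d, with mu = m0 / scale
# (the slope of this particular g at zero, so the bound is tight near d = 0)
mu = m0 / scale
d = np.linspace(1e-9, 200.0, 2001)
assert np.all(g(0.0) - g(-d) <= mu * d + 1e-12)
```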
Because g is nondecreasing, when ∆ ≥ 0 both sides of inequality (2) are nonnegative, so (2) is equivalent to the sensitivity-like bound |g(0) − g(−∆)| ≤ µ∆. There are several useful sufficient conditions for (2) to hold. For example, it suffices that the magnitude of the first derivative of g is bounded from above by µ for all ∆ ≥ 0, i.e., |∂g(−∆)/∂∆| ≤ µ, but this is not a necessary condition, as the function g need not be differentiable. We show a connection between this sufficient condition and the curvature of a trading function for a given CFMM in appendix C; we also discuss explicit bounds for µ for some CFMMs used in practice, and how they relate to common intuition, in the following section.

Liquidity bounds. We will say that a price impact function for a market is κ-liquid if it satisfies

g(0) − g(−∆) ≥ κ∆.     (3)

In other words, selling ∆ coin to the market decreases the reported price by at least κ∆, which implies that there is some amount of price slippage that is linearly bounded from below by a factor of at least κ. Additionally, we note that it is possible (and, in fact, common for many markets) that a trading function is both µ-stable and κ-liquid, with κ ≤ µ.

Trade sizes. It is often the case that bounds of the form of (3) (and, sometimes, bounds of the form of (2), as we will see later in this section) are not global but instead hold over some interval of size L; i.e., such bounds only hold for 0 ≤ ∆ ≤ L. In these specific cases, we will mention the corresponding conditions in the statements, as required for the results to hold for CFMMs in practice. On the other hand, we note that all of the proofs presented have results which immediately carry over in this case, even when the interval is not explicitly mentioned. (We leave such extensions as simple exercises for the reader.)

CFMM curvature

In decentralized finance, the market we are studying is almost always a constant function market maker, or CFMM. (See, e.g., [4] for an introduction.) In this case, the markets' behaviors are specified by (often simple) mathematical formulae, and they often have closed-form solutions for the constants µ and κ. In this section, we will show how to compute µ and κ for some of the CFMMs used in practice.

Constant function market makers. A CFMM is an algorithmic market maker [48,47,32] defined by its reserves, specifying how much of each coin is available for trading, and its trading function, which controls whether the market maker will accept or reject a proposed trade. The reserves are given by R ∈ R_+ for the coin to be traded and R′ ∈ R_+ for the numéraire coin, while its trading function is given by ψ : R_+^2 × R^2 → R. The trading function maps the pair of reserves (R, R′) ∈ R_+^2 and a trade (∆, ∆′) ∈ R^2, purchasing ∆ of the coin to be traded and ∆′ of the numéraire, to a scalar value. The CFMM then pays out ∆ of the traded coin to the trader and receives ∆′ of the numéraire. This results in the following update to the reserves: R ← R − ∆ and R′ ← R′ + ∆′. (Negative values of ∆ and ∆′ reverse the flow of the coins.) For notational convenience, we will abuse notation slightly by writing ψ(∆, ∆′) for ψ(R, R′, ∆, ∆′), such that the function ψ(·, ·) implicitly depends on the reserve values (R, R′) in the remainder of the paper. Additionally, because our focus is on the two-coin case and not the general n-coin case, we use different notation than [4] to prevent overly cumbersome proofs and results. We show the exact connection between both forms in appendix A.

Marginal prices.
Given a CFMM with trading function ψ, the marginal price at these reserves is given, as in [4, §2.4], by the ratio of the partial derivatives of ψ evaluated at the trade; we refer to this expression as (4) below. Here ∂_iψ denotes the partial derivative of ψ with respect to the ith argument, and ∆′ ∈ R is the (usually unique) solution to ψ(∆, ∆′) = ψ(0, 0), for a given ∆ ∈ R. Note that this is only defined whenever ∆ ≤ R and ∆′ ≥ −R′; i.e., when there are enough reserves to complete the trade. Because it is often the case that ψ satisfies the condition ∆′ = ∫_0^∆ g(−t) dt ≤ R′, even as ∆ ↑ ∞, we will consider these bounds to be implicit, unless otherwise stated.

Marginal prices with fees. While it is possible to implicitly include fees in the definition of the CFMM's trading function, it is often simpler to include the fee explicitly. In many cases, such as Uniswap, Balancer, and Curve, the fee is given by some number 0 < γ ≤ 1 such that (1 − γ) is the percentage fee taken for each trade, and the fee-less CFMM, with trading function ψ, is modified in the following way [4, §3.1]:

ψ_f(∆, ∆′) = ψ(γ∆, ∆′),

where ψ_f is the CFMM trading function with fees, for trades which sell some amount of coin ∆ ≤ 0 to the CFMM. The reserves are updated in a similar way as in the original CFMM. We note that the case where ∆ ≥ 0 can be derived by appropriately exchanging the traded coin and the numéraire. The directionality here comes from the fact that fees are usually charged 'on the way in,' or, in other words, asymmetrically charged to the coin being sold to the CFMM. (See, e.g., appendix A.) In this case, we can write the marginal price of a given trade of size ∆ ≤ 0 after fees in terms of the marginal price of the original CFMM, since

g_f(∆) = γ g(γ∆),

where ∆′ is the (usually unique) solution to ψ(γ∆, ∆′) = ψ(0, 0), and, as before, ∆ ≤ 0; i.e., we are selling the coin to be traded to the CFMM. Given that we can express the price impact function g_f with fees in terms of the fee-less price impact function g, the next problem is to find bounds of the form of (2) for fee-less CFMMs, which we show for a few special cases.

Constant sum market maker. The simplest example of a CFMM is the constant sum market maker, whose trading function is the constant sum trading function

ψ(∆, ∆′) = (R − ∆) + (R′ + ∆′).

In this case, the marginal price is, from (4), equal to 1 whenever −R′ ≤ ∆ ≤ R, and is otherwise undefined. In this case, we have the following bound, whenever 0 ≤ ∆ ≤ R′:

g(0) − g(−∆) = 0 ≤ µ∆,

with curvature bound µ = 0. (This is simply due to the fact that a constant sum market maker always reports a fixed price, so long as it has nonzero reserves.) Similarly, we also have κ = 0, on the same interval, by the same argument.

Global bounds on µ for convex impact. While the constant sum market maker has a simple enough trading function that it can be analyzed directly, analyzing other trading functions in the same way can quickly lead to very complicated results. A useful and simple condition applies in the common case that the price impact function, g, is a differentiable convex function. In this case, we have that, for all valid ∆,

g(−∆) ≥ g(0) − g′(0)∆,

where g′(0) is the derivative of g evaluated at zero, by the first-order condition for convexity [8, §3.1.3]. This can be written as

g(0) − g(−∆) ≤ g′(0)∆.

Setting µ = g′(0) yields the desired result. When g is differentiable, taking the limit as ∆ ↓ 0 shows that this is the tightest possible bound on µ over any nonzero interval size. (If g is not differentiable, we can take µ to be the largest subgradient of g at 0, which is the tightest possible bound by the same argument.)

Local bounds on κ for convex impact.
On the other hand, given any interval of size L ≥ 0, we can give a κ-liquidity bound for g, by noting that, because g is convex, the definition of convexity gives the following inequality: A basic rearrangement shows: and setting κ = (g(0) − g(−L))/L gives the result. Since the bound is tight at ∆ = 0 and ∆ = L, this κ yields the tightest possible bound along this interval. Uniswap. One of the simplest nontrivial convex bounds is the bound for Uniswap (or constant product market maker) with no fees, where we can write where k = RR is the product constant [5, App. A]. If R > ∆ (i.e., there are enough reserves to carry out a trade of size ∆) this function is a convex function, as x → 1/x 2 is convex over the positive reals. Using inequality (5), we have, for all ∆ ≥ 0, In the special case where the marginal price at the zero trade is g(0) = 1 (as is common with stablecoin-stablecoin markets), which happens when R = R , we can re-write µ in terms of the portfolio value of the reserves as where the portfolio value is given by Similarly, for any interval size L ≥ 0, we have Note that both µ and κ both decrease when R increases as liquidity is added (e.g., via Uniswap's addLiquidity function) for a fixed price g(0) and interval size L. In other words, Uniswap's effective curvature decreases as the reserves increase, as one might intuitively expect. We can interpret this as, for a fixed trade cost, larger trades can take place in Uniswap with higher reserves. An alternative interpretation is that the cost of manipulation also increases in the reserve size R. These results were proven in [5, §2.3] using different techniques which do not easily generalize. Two asset Balancer. For Balancer (which is also sometimes called the constant mean market maker [5, §3], or the geometric mean market maker [25]) with two assets and weight τ ∈ (0, 1), we have the trading function Let ξ = τ 1−τ for notational convenience. We then have: Note that when τ = 1 2 then ξ = 1 and this price impact function is equal to Uniswap's, generalizing result (6). This function is convex since x → x −(1+ξ) is convex over the positive reals for any ξ > −1. This implies As with Uniswap, we can give a simple expression for µ in terms of the portfolio value in the special case where g(0) = 1: where the portfolio value is given by P V = g(0)R + R = R/τ . (Note that the expression for µ is symmetric about τ = 1/2, even though the portfolio value P V is not.) Similarly, for any interval of size L, we have that As before, we have that both µ and κ are decreasing functions in the reserves R for a fixed marginal price g(0) and interval length L. We can additionally recover the Uniswap bounds on κ and µ by setting τ = 1/2, or, equivalently, ξ = 1. Two asset Curve. Another very popular CFMM is Curve [21], with trading function It is worth noting that, as α/β becomes large, then ψ is approximately close to the trading function for the constant sum market maker. Similarly, as α/β becomes small, ψ becomes similar to the Uniswap trading function. (See, e.g., figure 3, which shows that µ for Curve converges to the curvature constants for constant sum as β ↓ 0 and Uniswap as β ↑ ∞.) One can imagine variations on Curve where the product term is replaced by the reciprocal of the weighted geometric mean, as with Balancer, or a number of other functions. 
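Before turning to Curve's marginal price function, here is a small numerical sketch of the two generic bounds derived above, µ = g′(0) and κ = (g(0) − g(−L))/L, applied to a constant-product-style impact function. The exact expression used for g is a reconstruction for illustration (the text above only identifies the product constant k = RR′ and the 1/x² shape), so treat that functional form as an assumption.

```python
import numpy as np

def curvature_bounds(g, L, h=1e-7):
    """Bounds from the section above for a convex price impact g:
    a global mu = g'(0) (finite-difference approximation) and a local
    kappa = (g(0) - g(-L)) / L over the interval [0, L]."""
    mu = (g(0.0) - g(-h)) / h
    kappa = (g(0.0) - g(-L)) / L
    return mu, kappa

# Constant-product-style impact, reconstructed for illustration: selling d of
# the traded coin raises its reserves from R to R + d, so the quoted price
# falls as g(-d) = k / (R + d)**2 with k = R * Rp.  Not taken verbatim from
# the text above.
R, Rp = 1000.0, 1000.0
k = R * Rp
g_cp = lambda x: k / (R - x) ** 2   # x is the argument of g, so g(-d) = k / (R + d)**2

mu, kappa = curvature_bounds(g_cp, L=100.0)
print(f"mu ~ {mu:.6f}, kappa ~ {kappa:.6f}")  # kappa <= mu, as expected
```

Both numbers shrink as the reserves grow at a fixed starting price, matching the intuition in the text that deeper pools have lower effective curvature.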
The marginal price function for Curve is relatively complicated, but can be derived with some work (see [4, §2.4]): where figure 4 we see that g is indeed convex for a number of parameters β (setting α = 1 without loss of generality, as g is homogeneous of degree zero with respect to (α, β)). Showing that g is convex is rather more involved; we provide a proof in appendix D. While, in general, µ = g (0) can yield complicated expressions for Curve, the special case where g(0) = 1 can be written as a simple function of the portfolio value: where P V = g(0)R + R = 2R. This provides a simple expression for the maximum slippage a trader can expect for a given trade size when assets on Curve are trading at their peg. Examples in practice One of the simplest examples of curvature impacting trading performance is in stablecoin trading venues. Stablecoins, which are assets that aim to be approximately pegged to a fiat numéraire such as the US dollar, are assets with an approximately constant price due to their peg. However, their prices and outstanding supply often differ for systematic reasons. For instance, one type of asset might only be centralized and only allowed to be created by non-US entities (e.g., Tether), whereas another asset is more decentralized (e.g., MakerDAO). The former stablecoin might have a small number of large institutions using the coin whereas the latter is likely to have more small participants. If this is the case, it will naturally be easier to perform bigger trades in the former currency. A CFMM designed for stablecoinstablecoin trading, such as Curve, needs to have curvature that is adapted for these types of trading. Uniformed trading. Generally speaking, the trading of stablecoins for stablecoins tends to be uninformed. That is, users often trade these stablecoin pairs because a specific smart contract or exchange that they want to interact with only allows the use of a particular stablecoin. There is no information about the direction of trading and, given the ease of creation-redemption arbitrage for stablecoins, there is little use in trying to predict stablecoin prices [40,31]. Curvature. Intuitively, then, venues for stablecoin-stablecoin trades should have relatively low-curvature price impact functions, which would entice traders due to the small price slippage, and entice liquidity providers due to the small opportunity cost. Taking this idea to its extreme, one might argue that assets that are supposed to be the same value should be traded on a CFMM with zero curvature. An example of such a market is mStable, which uses the constant sum trading function presented in §1.1. These curvature-less markets have trouble responding to price, as they effectively quote a fixed price for any trade performed. In practice, as illustrated in figure 6, we see that Curve generates almost an order of magnitude more in trading fees for LPs than mStable. This is driven by two phenomena. First, a zerocurvature AMM will end up less liquid in practice because it quickly runs out of reserves as price fluctuates. In the case of mStable, there is a chronic shortage of Dai for trades as Dai is frequently trading above its peg. Second, LPs face maximal opportunity cost relative to a low-curvature CFMM with the same assets and fees. Most of the trading volume and liquidity provision on mStable appears instead to be driven by yield farming. We discuss such incentives in more detail later in §3.3. 
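The remark above that a zero-curvature venue "quickly runs out of reserves as price fluctuates" can be made concrete with a toy constant-sum pool: whenever the external price moves off the 1:1 quote, arbitrageurs can trade against the fixed price until one side of the reserves is empty. The model below is deliberately simplified (no fees, a single external price) and is not a description of mStable itself.

```python
def constant_sum_after_arbitrage(R, Rp, external_price):
    """Reserves of a fee-less constant-sum pool (fixed quote of 1) after
    arbitrageurs exhaust a mispricing against an external market.
    Toy model: one of the two reserves is drained completely."""
    if external_price > 1.0:
        # Traded coin is worth more elsewhere (e.g. a stablecoin above its
        # peg): arbitrageurs buy all of it here at a price of 1.
        return 0.0, Rp + R
    if external_price < 1.0:
        # Traded coin is worth less elsewhere: arbitrageurs sell it here at 1
        # until the numeraire reserve is empty.
        return R + Rp, 0.0
    return R, Rp

print(constant_sum_after_arbitrage(1_000.0, 1_000.0, external_price=1.02))
# -> (0.0, 2000.0): nothing of the traded coin is left for ordinary users to buy
```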
LP returns and curvature Empirically, it has been observed that returns to LPs in CFMMs are closely tied to both the shape of the CFMM trading set and the properties of the price process of the two assets. Curve's advantage over Uniswap for mean-reverting, low volatility assets led to it attracting significantly more trading volume for certain assets. As shown in §1.1, Curve has a low curvature regime around a particular price and a high curvature region far away from this price. This design was chosen to optimize profits earned by LPs for mean reverting assets while allowing traders to place large sized orders when assets are near their mean. A natural question to ask is: how much does adjusting the curvature of a CFMM for such assets affect LP returns? Liquidity provider portfolio value. The LP portfolio value is defined as the value of coins that an LP has locked in a CFMM. If a user owns b ∈ [0, 1] percent of the LP shares in a CFMM, then they own the right to claim bR of asset 1 and bR of asset 2, where R and R are, as before, the reserves of the CFMM. In this subsection, we will formalize heuristics arguments of [21, 23] which show that LP portfolio values are directly affected by the curvature of a trading function. We will also generalize these claims to generic price processes interacting with CFMMs by considering adverse selection towards LPs. In particular, we will consider how LP returns are affected by informed traders, who have an estimate for the probability distribution of future prices. These results will illustrate that the design of efficient CFMMs for a variety of markets depends on how the curvature is adjusted to ensure that LPs can be profitable. In particular, we will see that, for stablecoins, where most trades are uninformed, low curvature improves performance and LPs still come out with positive profit even for large trades, while markets where there exist informed traders with a bigger edge, higher curvature is preferable to prevent downside LP losses. Uninformed trading and low curvature CFMMs In this scenario, we consider a trader who wishes to buy some amount of coin ∆ ≥ 0 from the market. Such purchases will cause some amount of slippage in the reported price g, causing some loss to the LP, which may be recouped with fees. The question is, assuming that this CFMM is the only available market, what is the largest trade that a trader can perform such that a liquidity provider still has positive payoff from the trade? Curvature and profits. Formally, suppose that an LP provides all of the assets R, R ∈ R + to a CFMM with µ-stable price impact function g. We assume that the CFMM charges some fee (1 − γ), but this fee is not included in g. The no-fee portfolio value of this CFMM is: After a price change to m a = g(−∆) ≤ g(0), the opportunity cost (sometimes called the 'impermanent loss') of this portfolio is given by and, by definition of marginal price Since g is a nonincreasing function, we have is a lower bound on the opportunity cost. Here, the second inequality follows from the definition of µ-stability. On the other hand, if g is a marginal price function with some fee 0 < γ ≤ 1, the value of fees earned is at least (1 − γ)∆g(−∆) = (1 − γ)∆m a , where m a is the new price (see appendix B for a general statement and proof), so LPs are guaranteed to make a profit whenever (1 − γ)∆m a > µ∆ 2 , which happens when Discussion. 
This inequality shows that a sufficient condition on the trade size for which LPs still make a profit is inversely proportional to the curvature bound on the CFMM. Additionally, using this formula, we can compute a lower bound on the rate of growth of profits for LPs as a function of fees and curvature, given a distribution of trades, Prob[∆ ≤ x]. An extension of this result can provide a discrete time, curvature-based analogue of [51] that generalizes to a number of CFMMs other than Uniswap. Another interpretation of inequality (7) is that, as the effective curvature decreases, traders can perform large trades, while liquidity providers still come out ahead, relative to an equivalent portfolio which simply holds R of the traded coin and R of the numéraire. Note that this inequality comes from the fact that the trade does not depend on the future price of the coin, which we will call an 'uninformed' trade, and such trades are, as discussed previously, very common in stablecoin-stablecoin trades. Informed trading On the other hand, liquidity provider losses change drastically when we have an agent who attempts to maximize their profits given information about future prices. In order to model this phenomenon, we need to describe a participant other than the LP, who has some amount of knowledge of future prices. Analogous to [6] and the classical market microstructure models [36,28], we will consider a market with an LP and an informed trader under the assumption of no-arbitrage. We will construct a two-player game between an informed trader who can predict the next price update of some external market with non-trivial edge, and a liquidity provider whose funds are locked in the CFMM. Using this game, we will show a profit (or loss) lower bound for both LPs and informed traders, akin to those used to describe market maker profits in open limit order books [28,27]. From this lower bound, we will show that informed traders need less of an informational edge to guarantee that trading with a lower curvature CFMM has profits as large as trading with a higher curvature CFMM. Problem set up. In this game, as before, we have two agents: a liquidity provider and an informed trader, where the informed trader is allowed to trade with the CFMM. We will assume that the CFMM, with fee-less marginal price function g (such that the marginal price with fees is g f (−∆) = γg(−γ∆) with ∆ ≥ 0) and the reference market both start at some fixed price m 0 = γg(0). We will assume the function g is µ-stable and κ-liquid in some interval 0 ≤ ∆ ≤ L. The informed trader then knows that the reference market price will decrease to some amount m 1 ≤ m 0 with probability α or stay at m 0 with probability (1 − α). By no-arbitrage, any price discrepancies between the CFMM and the reference market are immediately removed, so the informed trader must make a trade which maximizes the expected profit, before the trader is able to see the new price. Informed trader edge. The expected edge of an informed trader under this framework is given by We can rewrite this in a slightly simpler form by noting that, by assumption, m 0 = γg(0), so Using the fact that g is µ-stable, we have g(−γt) − g(0) ≥ −µγt, and that Taking the supremum of both sides over ∆ ≥ 0 (since an informed traders seek to maximize their profit) gives that Here, the surprising fact that the fee γ appears in the denominator (versus, as one might expect, in the numerator) happens because the price m 0 = γg(0) depends implicitly on the fee. 
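As a rough numerical illustration of how the curvature bound enters the informed trader's edge, the sketch below grid-searches the trade size that maximises expected profit against a CFMM with an assumed linear, µ-stable marginal price g(−∆) = g(0) − µ∆ and fee (1 − γ). The linear impact, the assumption that the trader can unwind the position at the realised external price, and all numerical values are simplifications for illustration; the bound in the text is more general.

```python
# Numerical sketch (assumed linear mu-stable impact, no interval bound L). An informed
# trader expects the external price to drop to m1 with probability alpha; we grid-search
# the trade size that maximises their expected profit against the CFMM and report how
# the attainable edge varies with the curvature bound mu.
import numpy as np

def expected_edge(mu, alpha, m0=1.00, m1=0.95, gamma=0.997, n=20001):
    g0 = m0 / gamma                      # reference price m0 = gamma * g(0)
    d = np.linspace(0.0, 50.0, n)        # candidate trade sizes (sold to the CFMM)
    # proceeds of selling d at the post-fee marginal price gamma*g(-gamma*t), integrated,
    # versus the expected cost of buying d back at the realised external price
    proceeds = gamma * (g0 * d - 0.5 * mu * gamma * d**2)
    expected_buyback = (alpha * m1 + (1 - alpha) * m0) * d
    profit = proceeds - expected_buyback
    i = int(np.argmax(profit))
    return d[i], profit[i]

for mu in [1e-3, 1e-2, 1e-1]:
    d_star, edge = expected_edge(mu, alpha=0.8)
    print(f"mu = {mu:6.3f}  optimal trade = {d_star:7.3f}  expected edge = {edge:8.4f}")
```

In this simplified setting, halving µ roughly doubles both the optimal trade size and the attainable edge for a fixed α, the qualitative dependence on curvature discussed in the text.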
This result holds independent of the interval size L if g is µ-stable for all ∆ ≥ 0. Note that, if µ is very small (i.e., the CFMM has low curvature) then E V is large; similarly, given two CFMMs, one with lower and one with higher curvature, α, the edge, needs to be larger in the CFMM with higher curvature to achieve the same lower bound for the payoff. Liquidity provider loss. We can similarly get a lower bound on the expected loss of an LP since it is equal to −E V (∆): Minimizing both sides gives whenever α(m 0 − m 1 ) ≤ Lκγ 2 (i.e., when the unconstrained minimum lies in the interior of the interval [0, L]) and is otherwise bounded by −E V ≥ κγ 2 L 2 /2 − α(m 0 − m 1 )L. We will mostly consider the first case in the following discussion, since we can often expand the interval L to be large enough to contain this bound. (Note also that (8) is equivalent to giving an upper bound on the expected value of an informed trader.) Discussion. This matches the empirical observation that lower curvature CFMMs tend to have higher liquidity for assets that do not require much information to trade whereas LPs of higher-curvature CFMMs lose less to informed traders. This result illustrates that unlike common wisdom in the CFMM design space, one need not only have an optimal fee to maximize LP returns, but one needs to adjust the curvature as well. Moreover, this result represents an analogue of classical microstructure results that show that the shape of an order book gives bounds on adverse selection. Glosten [27] showed that when you consider market makers who have to quote prices on multiple markets, then the shape of the order book impacts how liquidity changes in response to adverse selection. In figure 7, we see two different shapes for an order book, one approximately concave (the unshaded bars) and one convex (the filled in region). When a market maker observes or realizes adverse selection costs, they make a market more illiquid (e.g., by canceling orders) to force active traders to pay a higher impact cost to market makers. This leads to the higher curvature, concave shape seen in figure 7. Equation (8) then suggests that market makers in CFMMs can replicate the same effect by increasing the curvature, therefore increasing the curvature lower bound, κ. Price stability and yield farming In this section, we will describe price stability when arbitrageurs trade between two markets, each with different curvature bounds. This stability result provides a quantitative explanation for the stability phenomenon of figure 2. Given that the empirically observed sUSD price instability was due to liquidity incentives (yield farming), it is natural to expect that there is a relationship between curvature and the precise costs of a stability incentive. In other words, the question we seek to answer is: how much do we need to pay LPs for providing liquidity? We describe this connection precisely in §3.3, which shows that there is an optimal liquidity subsidy for a market that interacts with an external market with different curvature. Model description Here, we will define the market model used throughout the remainder of this section. In our model, we have two available markets: the external market (whose price fluctuates due to extrinsic demand) and the secondary market (which we will assume is a CFMM, though the model holds more generally), along with an arbitrageur agent which seeks to maximize their profit by exploiting the difference in price between these two markets. Market model. 
We describe a relatively simple, but very general, model of the external market and how it interacts with the given CFMM. In particular, the external market reports some strictly positive price m 0 ∈ R ++ at the start of the round. The basic model of interactions between the markets and the arbitrageur proceeds as follows: 1. At the round start, the quoted external market price is m e 0 , while the secondary market price is m s 0 . 2. An arbitrageur then trades with the external market and the secondary market (which will usually be a CFMM). This results in a new external and secondary market price m a which are equal since no-arbitrage has been enforced. 3. The external market price then changes from the no-arbitrage price m a to a new price by some process modeling external influences. Step 1 is repeated with the new external and secondary market prices. In fact, in our presentation, we will not assume anything about the dependence of the new price on m a or even on m e 0 or m s 0 , which means that the results here hold for essentially all (reasonable) models of exogenous price changes for the external market price. Main goal. We will show that, even when the external market price m e 0 differs widely from the secondary market price m s 0 , the (new) arbitraged market price m a does not differ too much from the previous secondary market price m s 0 . Written out, we wish to find conditions such that, even when the price difference between both markets before no-arbitrage, m e 0 −m s 0 , is large, the difference between the no-arbitrage price and the secondary market's price m a − m s 0 is small, in a precise sense. This would imply that even though the external market price deviates from the secondary market price m e 0 − m s 0 , the secondary market is able to force the new no-arbitrage price, m a , back to a price that was close to its previous value, m s 0 . Assumptions. In this set up, the external market will have a price impact function f which is, as before, continuous and nondecreasing. We will define the initial price of the external market as m e 0 = f (0). Additionally, we will assume the external market is κ-liquid, with a slightly different definition than the one given in §1.1: we will say an external market is κ-liquid if it satisfies, for ∆ ≥ 0, This differs from the original definition given in §1.1 because, here, ∆ is the amount purchased from the external market, rather than the amount sold to it. In this case, if f is a differentiable convex function, we have that κ = f (0) is the tightest possible constant κ satisfying this condition and is a global bound (holding for all ∆ ≥ 0) which follows immediately from the first-order convexity conditions. As before, we will simply assume that the secondary market, with continuous, nondecreasing price impact function g, is µ-stable with the usual definition given in §1.1. We will similarly define m s 0 = g(0). Stability and Curvature In this section, we derive the main result for general, continuous price impact functions, satisfying the conditions outlined previously. Main result. Assume the price impact function of the primary market, f , is κ-liquid (in the sense above) and that the price price impact function of the secondary market, g, is µ-stable. 
We will show that the secondary-market's price change is bounded in the following way: Note that because both sides of the inequality are nonnegative, inequality (9) can also be written as In other words, the no-arbitrage price change is at most a factor of µ/κ from the difference between the primary and secondary markets. This quantity (and therefore the price change after arbitrage) is small whenever the secondary market is very liquid (µ is small), or when the external market is very illiquid (κ is large). While apparently simple, we show in §3.3 that this result can be applied to many useful circumstances. Proof of main result. By assumption, we have . Then, if we can find any ∆ ≥ 0 that satisfies then there exists some 0 ≤ ∆ ≤ ∆ such that by continuity and monotonicity of f and g. As before, the no arbitrage price m a is defined as m a = f (∆ ) = g(−∆ ). To show that there exists a ∆ satisifying (10), note that any ∆ ≥ 0 which satisfies automatically satisfies (10) since where the first inequality follows from (3), the second inequality follows from (11), while the last inequality follows from the fact that g is a nondecreasing function. In order to satisfy (11), we can easily choose Such a ∆ will then satisfy (10). This, in turn, implies that a no-arbitrage trade ∆ satisfies 0 ≤ ∆ ≤ ∆, and Here, the first equality follows from the definition of m s 0 and g, while the first inequality follows from the monotonicity of g and the second inequality follows from (2). The resulting inequality is the one given in (9). Assumptions. While the assumption that m e 0 ≤ m s 0 might appear restrictive, it is actually fully general. For example, if f (∆) and g(−∆) specify the price of an asset A with respect to a tradeable asset B, after buying ∆ amount of asset A from the primary (or secondary) market, then the amount of asset B traded with each market is given by the quantity functions On the other hand, we may ask what the price of asset B is with respect to asset A after buying some amount ∆ of asset B. The quantity of asset A received for ∆ of asset B is easily seen to be p −1 (∆ ) for the primary market and q −1 (∆ ) for the secondary market. Both exist because f and g are strictly positive which implies that p and q are strictly monotonic. This implies that the respective price (using implicit differentiation) is given by where we have defined ∆ = p −1 (∆ ), and similarly for q. So, if m e 0 ≥ m s 0 , we may always 'swap' asset A for asset B in this sense, such that the resulting marginal prices are given by 1/m e 0 ≤ 1/m s 0 , and enforce no-arbitrage conditions over coin B, instead. Extensions. As with §1.1, may not be the case that constants µ or κ exist for trades of all possible sizes. In general, such constants do exist for trades of bounded size, say 0 ≤ ∆ ≤ L. In this case, the main result extends immediately in the following way: for any prices satisfying we have The proof of this statement is identical to the one above, with the additional condition that the primary and secondary market prices differ by no more than κL. Yield farming subsidy One of the main drivers of the growth in CFMM usage in 2020 was yield farming. Yield farming, which is similar to maker-taker rebates in traditional trading [11], involves subsidizing the provision of liquidity for a new issued crypto asset. Suppose that some asset is issued at time t 0 by a smart contract and that it has an inflation schedule i t ∈ R + , where i t is the number of units of X produced at time t. 
In order to incentivize liquidity between the new asset and a numéraire, the smart contract reserves some percent (say, t ) of inflation for liquidity provision. If a user creates LP shares for a CFMM, which trades this coin and then stakes (or, in other words, locks) these shares into a smart contract, then they receive some amount of this coin from the smart contract for providing liquidity. For instance, if there are 1000 LP shares locked into the contract and a single user created 100 of these locked shares, they might receive 100 1000 i t t units of the new asset at time t. By subsidizing liquidity, the smart contract issuing the new coin can ensure that users can trade the new asset while also ensuring that liquidity providers have lower losses. The main loss that CFMM liquidity providers face is 'impermanent loss' or losses due to the concavity of the portfolio value of an LP share; see, e.g., [18] for the specific case of Uniswap. One can also directly show these losses occur by using the definition of the portfolio value in [4]. Sufficient subsidy. A protocol designer would then want to ensure that LPs are compensated enough, say, by some amount R in the traded coin, to have nonnegative profit after a no-arbitrage trade with the external market; i.e., we need to guarantee that the portfolio value of an LP, after arbitrage and subsidy, is nonnegative. To do this, note that the opportunity cost or 'impermament loss' of being an LP in the secondary market is given by (in a similar way to §2.1): where we have used the fact that g is nondecreasing, and g(0) = m s 0 by definition. Applying (9) gives (m a − m s Therefore, to incentivize LPs to continue adding liquidity to the µ-stable secondary market, assuming an external market that is κ-liquid, it suffices to subsidize the LPs by some amount in the numéraire. Alternatively, they can be subsidized by at least in the traded coin, since m e 0 ≤ m a ≤ m s 0 . In other words, the total quantity of subsidy is proportional to the curvature of the primary market, and inversely proportional to the curvature of the secondary market. If we define h = m e 0 /m s 0 to be the percentage growth of the asset (note that h ≤ 1 since m e 0 ≤ m s 0 ) then we have simple expression for the amount of subsidy that is sufficient, in the traded asset, which we will define as: Discussion. This gives a simple condition which guarantees that liquidity providers have nonnegative returns for providing liquidity. In particular, we note that more subsidy has to be provided as h becomes small (i.e., the price is changing with large drift) or when µ/κ is large (i.e., the secondary market, for which the LPs are providing liquidity for, is very illiquid when compared to the external market). In general, this means that how much subsidy one might need to provide depends not just on the drift of the asset, but also the relative curvature of the two markets. Dynamic hedging of CFMMs We describe how dynamic hedging quantities for contingent claims (e.g., an asset's Delta and Gamma) are constructed for CFMMs, providing a means for LPs to hedge risk such as so-called 'impermanent loss.' In order to simplify notation, we will refer to the Delta and Gamma of a portfolio as P ∆ and P Γ , respectively. To connect portfolio value to curvature, we will show that analogues of dynamic hedging Greeks are closely related to curvature. 
These quantities are important for liquidity providers as they represent the net exposure to the underlying collateral the LPs have, as well as how to hedge loss due to drift (known colloquially in decentralized finance as 'impermanent loss'). While Uniswap can be statically replicated by a portfolio of options [18], it is unclear precisely how to do this for generic CFMMs. We will compute dynamic hedging quantities for LPs here and show how they behave under trades. This behavior, which will be connected to curvature, represents extra convexity in the replicating portfolio that is needed to compensate for the LP share of a high curvature CFMM. Approximate hedges. We apply these hedging results to try to construct approximate hedges for impermanent loss. If the price of the traded coin increases while volatility is mollified, the LP realizes impermanent loss [5,51]. As the price drifts up, if the LP is able to increasingly sell put options for the tradable asset, then they can lower their realized impermanent loss when they exit the pool. The question then is: how many put options does an LP need to sell as a function of the price upon their entry intro the pool? We illustrate that if options on realized impact costs exist, then an LP can enter a short put option portfolio on impact costs to hedge their impermanent loss. Such options correspond to options on the change in the market price when a trade of size ∆ is made. Owners of such an option effectively have insurance on price changes in the underlying due to a single large trade. Delta hedging in practice. While unnatural relative to conventional options on stocks, options on impact cost exist in traditional finance when considering American Depository Receipts (ADRs) in the US equity market. Suppose that there is a stock that exists both in a non-US market and on a US exchange as an ADR. ADRs allow for the non-US stock to be traded in the US via a synthetic asset that has a creation and redemption mechanism. When the foreign exchange rate between the non-US currency and US dollars drifts wildly, there is an impact of foreign exchange rate on option pricing. The difference in price between options on the non-US stock and options on US ADRs replicates an option on the impact cost of the foreign exchange rate on the stock price [41,42,3]. Computing the Greeks The portfolio value of the reserves, as before, is defined as where R and R depend implicitly on the price m and the value of ψ(0, 0). We then have: which we show below. (Here P ∆ and P Γ represent the corresponding Greeks of P V .) Computation. Because it is often the case that the function ψ is of the following form (when there are no fees) ψ(R, R , ∆, ∆ ) = Ψ(R − ∆, R + ∆ ), for some function Ψ : R 2 → R (as is the case with all examples presented in §1.1), it suffices to consider the function Ψ in terms of only the reserves R and R . We will assume this is true in the following derivation. As before, the portfolio value is given by where m is the market price, which, by no arbitrage, must satisfy (at the reserve values R and R , after no arbitrage) (Note that there is no negative sign here, due to the definition of Ψ.) By implicitly differentiating Ψ(R, R ) over any level set, we have that Using this final condition: as required, while the expression for P Γ follows from the fact that Approximate hedging. 
On the other hand, the deterministic price schedule of CFMMs also implies that directly hedging trade quantities is equivalent to hedging prices if the CFMM is the primary market. In the case that g is convex, which we assume here, we can give bounds that can be easily replicated with options. We note that this result only holds for decreasing prices, due to the assumptions on µ and κ, but we suspect there exist more general bounds, holding over any change in price, which are very similar in spirit. Suppose that an LP has reserves R and R in a CFMM and the CFMM is the primary market. First, note that, After a trade of size ∆, we have where we used (4) in the last equality. Therefore, if our CFMM is µ-stable in the sense of (2), then, because g is convex, nondecreasing, we have that g (−∆) ≤ µ, so Finally, because d∆ dm ≤ 0, then we have that If the CFMM is both µ-stable and κ-liquid, then we have the following expansion: These inequalities show that the curvature constants provide means for super and subhedging of LP share risk. In particular, these linear bounds allow for an LP to hedge their risk using simpler instruments, as described in the sequel. Finally, we note that using these calculations, one can show that hedging a portfolio of n stablecoins is approximately equivalent to hedging two stablecoins via an n-dimensional generalization presented in Appendix E. Conclusion and future work In this paper, we explored how the shape of a constant function market maker affects its ability to serve as the primary market for digital assets. To do this, we defined a notion of curvature for a given market and then showed its implications for liquidity providers in the market, showing that this notion of curvature is very closely related to the intuitive notion of curvature in the case of constant function market makers. Liquidity provider returns. There has been much empirical evidence suggesting that the return profile of a CFMM liquidity provider depends on the shape of the CFMM's trading function. We studied the LP returns under three different scenarios: uninformed trading, informed trading, and yield farming. Some of the results presented take inspiration from the traditional market microstructure literature and consider the return profile of informed traders. These traders can be viewed as bringing information to the market by placing a Kelly-style bet on the next price update that will take place in the CFMM. We were able to use no-arbitrage to derive lower bounds on the liquidity provider loss (and, conversely, lower bounds on the edge trader's expected profit) akin to that of Glosten and Milgrom which connect curvature to the informed trader's informational edge. The results from this section provide a simple economic interpretation of curvature as the amount of information an informed trader needs to achieve a certain profit (given a fixed edge and market price). Yield farming. We then extended the definition to also include interacting markets with finite liquidity. These notions of shape or curvature could then be used to capture how a single trade on a market with finite liquidity affects prices on another market with finite liquidity. Using these definitions for curvature, we were able to bound the tracking error when an arbitrageur trades between a pair of markets with different curvatures. When specialized to CFMMs, the results presented generalize some of the results of [5,4] to the case where the market is not infinitely liquid. 
We then used this to analyze the yield farming phenomenon, where protocols began providing subsidies to liquidity providers. Here, we showed a lower bound to the amount of subsidy needed to pay liquidity providers to account for their 'impermanent loss' when compared to a market with bounded liquidity, which depended on the curvature of both markets and the rate of growth of the asset. Combined, these results suggest that the curvature of a CFMM needs to be optimized to avoid adverse selection while also capturing trading volume and fees related to asset price growth. Future work. This work can be extended in a number of ways. On the practical side, many of the results presented here only work in two dimensions (e.g., two asset trading). Generalizing our results to n dimensions would be a useful but potentially difficult problem. For example, it is not clear how to define µ-stability in higher dimensions for general CFMMs, without being overly restrictive. Moreover, even though we give some sufficient conditions on the curvature of a 'good' CFMM for certain applications, it is still an open question for how to take a given price process, represented, say, as an Itô process or a jump-diffusion process, and then construct a 'good' CFMM. If this were found, then one could take historical data for a crypto asset and construct an optimized CFMM for trading this asset. Finally, it is clear that dynamic CFMMs [25, 54,10,46,52], i.e., CFMMs whose trading functions vary in time according to either a stochastic or control mechanism, continuously affect the curvature of the trading function. Given the results of this paper, a natural extension to inquire about is: how should one design an optimal control mechanism to replicate a desired payoff or behavior? The results of this paper suggest that the trade-off between adverse selection and payoff growth are extremely important to such designs, especially for products with sharp payoffs or time decay (e.g., barrier options). The results of §3.3 intimate that there is such a mapping, akin to super replication results from traditional mathematical finance. Curiously, results from the mathematical finance literature also heavily use convex duality and it is likely that there are a number of fruitful translational results can be found. These results will likely give more power to the dynamic hedging results of §4 and will be useful for comparing CFMM payoffs to traditional derivatives pricing. We suspect that some of our conjectures in appendix I, regarding the superhedging of contingent claims specified by CFMM portfolio values, is likely to be a problem with deep connections to such results. A Form equivalence In [4], the trading function is defined as a function ϕ : R n + × R n + × R n + → R, which maps the reserves, input trades, and output trades to a real value. In this paper, we do not make an explicit distinction between the input and output trades; these are, instead, specified by the sign of the trade amounts ∆ and ∆ . In this case, we can make the following equivalences between the trade (∆, ∆ ) (the notation as used in this paper) and the input trade ∆ 0 ∈ R 2 + and output trade Λ 0 ∈ R 2 + as used in [4], for n = 2: while the reserves are simply R 0 = (R, R ). We then have as expected. The update equations remain identical, since R 0 ← R 0 + ∆ − Λ is equivalent to R ← R − ∆ and R ← R + ∆ , which means that all results from [4] hold as stated. 
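A minimal sketch of this equivalence for n = 2. The sign-splitting between input and output trades below is our own (hypothetical) convention, chosen to be consistent with the update rules quoted above.

```python
# Sketch of the notational equivalence in appendix A: a signed trade (delta, delta_p)
# in this paper's convention maps to nonnegative input/output trades (Delta0, Lambda0)
# in the convention of [4], and both produce the same reserve update.
import numpy as np

def to_input_output(delta, delta_p):
    # asset 1: the trader receives delta (reserves fall); asset 2: the trader tenders delta_p
    Delta0  = np.array([max(-delta, 0.0), max(delta_p, 0.0)])   # tendered to the pool
    Lambda0 = np.array([max(delta, 0.0),  max(-delta_p, 0.0)])  # received from the pool
    return Delta0, Lambda0

R = np.array([100.0, 200.0])          # reserves (R, R')
delta, delta_p = 7.0, 15.0            # trader takes 7 of asset 1, pays 15 of asset 2

Delta0, Lambda0 = to_input_output(delta, delta_p)
update_paper = np.array([R[0] - delta, R[1] + delta_p])   # R <- R - delta, R' <- R' + delta'
update_ref4  = R + Delta0 - Lambda0                       # R0 <- R0 + Delta0 - Lambda0
assert np.allclose(update_paper, update_ref4)
print(update_paper, update_ref4)
```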
B Lower bounds for portfolio value with fees The analysis of CFMMs with fees is often much harder than the analysis of fee-less CFMMs. This construction gives a simple lower bound which shows that it often suffices to consider a fee-less CFMM with fees taken separately; i.e., it often suffices to consider a CFMM where the fee is not reinvested into the reserves, but is instead given to LPs directly. Statement. For simplicity, we will use the notation from [4], which results in a simple proof for any number of coins n. (See appendix A.) Let ϕ : R n + × R n + × R n + → R be a trading function for a CFMM that can be written as (with some slight abuse of notation): where R 0 ∈ R n + are the reserves, ∆ 0 ∈ R n + is the input trade, and Λ 0 ∈ R n + is the output trade. We will assume that ϕ is increasing in its arguments, and let 0 < γ ≤ 1 such that (1 − γ) is the fee taken for the CFMM. (See, e.g., [4, §2.3.1].) In this case, we will consider the resulting portfolio value of an LP at some cost vector c ∈ R n + . We will then show that the portfolio value of the LP after any feasible trade (∆ 0 , Λ 0 ) is at least as large as the equivalent portfolio value at the previous reserves with an extra factor of (1 − γ)c T ∆. Proof. The proof is nearly immediate. Let R 1 = R 0 + ∆ 0 − Λ 0 be the post-trade reserves and R 0 be the pre-trade reserves, then where R is the solution to the fee-less portfolio-value problem [4, §2.5], with variable R ∈ R n + . Note that the second inequality follows since, by definition, (∆ 0 , Λ 0 ) is a feasible trade only when and so R = R 0 +γ∆ 0 −Λ 0 is a feasible point for the portfolio-value problem (18). Repeatedly applying this statement to any number of feasible trades (∆ k , Λ k ) yields the following lower bound for the portfolio value at time k: In many cases, because we are interested in finding a lower bound to the portfolio value of liquidity providers, it will often suffice to use this statement in order to achieve a reasonable lower bound. This allows us to side-step the potentially very complicated analysis of CFMMs with fees and the fees' interactions with the CFMM's reserves. C Relationship between g and curvature of ψ From equation (4), we can write, Given a CFMM invariant function, given initial reserves, ψ(∆, ∆ ), we can write: Therefore, g(∆) = ∂ψ ∂∆ −1 ∂ψ ∂∆ . Thus µ-stability condition, for sufficiently smooth g, relies on the first derivative of g: From [30, Prop. 3.1], we see that for an implicit function F : R 2 → R, the Gaussian curvature κ F is defined as, where ∂ i F is the ith partial derivative of F . Using implicit substitution (e.g., writing ∆ (∆)) and substituting it into eq. (19)), we can see that the two formulas are equivalent. D Curve's price impact function is convex In this section, we claim that if the following assumption on the reserve sizes holds, then Curve's impact function is convex: Recall that for Curve, the trading function is defined for α, β > 0 as, From [50,Prop. 3.8], a curve defined via a sufficiently smooth implicit function F (x, y) = 0 is convex if and only if its Gaussian curvature (i.e., equation (19)) is non-negative. Therefore, the claim of convexity is equivalent to showing that κ ψ ≥ 0. Since the denominator of equation (19) is non-negative, we only need the numerator to be positive, e.g.,: Using the definition of ψ, we have As (R − ∆) 5 (R + ∆ ) 5 ≥ 0, combining results with (21) yields the positivity condition Let A = α(R − ∆) 2 (R + ∆ ) and B = α(R − ∆)(R + ∆ ) 2 . 
Then this condition can be rewritten as Dividing through by (A − β)(B − α) gives the final condition Provided that A > β and B > α, then this condition is always true. The first condition is equivalent to (R − ∆) 2 (R + ∆ ) > α β and the second condition is equivalent to (R − ∆)(R + ∆ ) 2 > β α . Combining these gives the condition (R − ∆)(R + ∆ ) > 1. Minorizing R − ∆ by R is equivalent to the assumption (20), proving the claim. E Portfolio Greeks for n coin CFMM In the n coin scenario, the portfolio value [4] under no arbitrage is: where R ∈ R n + is the reserve for each asset and we assume asset n is the numéraire, such that m n = 1. Implicit differentiation of Ψ gives, Dividing through by ∂ n Ψ(R) gives This yields Note that when m j ≈ 1, which is what happens in stablecoin-stablecoin trading, P ∆ and P Γ resemble the two-asset trading hedges. F Conjecture: Delta hedging impermanent Loss The linear bounds of (17) suggest that one can hedge P ∆ if d∆ dm and d∆ dm can be replicated using options on impact. We note that the results presented here do not hold as stated since the assumption that the market price is decreasing, used in the definitions of µ and κ, is broken here. We suspect that more general results of this form do hold in practice, but leave this as an open question. First, note that d∆ dm is the inverse of the price impact function dm d∆ . Suppose that an LP wants to hedge their impermanent loss when the realized price impact is greater than a fixed quantity ξ. For instance, an LP in a stablecoin pool might believe that the price of the assets never deviate by more than 10% and fixed ξ = 0.1. Mathematically, ξ corresponds to an lower bound on the impact, e.g. dm d∆ ≥ ξ. Recall that the Carr-Madan expansion [13, App. 1] says that any payoff f ∈ L 2 (R + ) can be expressed, for κ > 0 as Consider the function f (F ) = 1 F 1 F >ξ+ ∈ L 2 (R + ) for some > 0. Using the Carr-Madan expansion at F = dm d∆ and κ = ξ, we have: This equation states that we can replicate the exposure d∆ dm 1 d∆ dm ≥ξ by holding a portfolio of 2 K 3 call options at strike K for all K ∈ [ξ, ∞]. Thus to hedge, we short this portfolio. Using put-call parity, this is equivalent to holding a quantity of the asset α and selling put options at strike prices K greater than our cutoff, weighted by 2 K 3 . This effectively provides a way for an LP to insure themselves against impermanent losses up to price impacts of size ξ by selling a portfolio of put options. G Multiperiod informed trading Suppose that we have a discrete time series of probabilities α t ∈ [1/2, 1). At time t, an informed trader has probability α t of predicting whether the true price m(t + 1) at time t + 1 is equal to the trader's predicted price p t+1 . Associated to each α t is a pair of quantities ∆(t), ∆ (t) ∈ R that represent the trade quantities needed to take the true price m(t) to the predicted price p t+1 . We can compute these quantities implicitly via the following differential equation implied by (4): To execute this in practice, an informed trader would need compute the optimal quantities ∆ * , ∆ * that satisfy these questions. However, given that Ψ can be quite complicated, such a trader would likely have to rely on approximate methods. In particular, it is likely that greedy methods like gradient descent would be employed given that there are latency constraints in realistic settings. How would we compute these optimal quantities to trade? 
To utilize a local method like gradient descent, we first need to compute an objective function h(∆, ∆ ) that we minimize using ∂ ∆ h, ∂ ∆ h. In light of the above equation, is natural to define the following objective function Let us assume that the informed trader is computationally constrained and attempts to approximate ∆, ∆ via gradient descent: This recursion is the gradient descent-ascent algorithm (GDA) [38] or forward-backward iteration [16]. Note that due to the natural constraint of increasing one quantity while reducing another, we take gradients in the opposite directions for each quantity. Moreover, this is how a number of CFMMs compute trade quantities in practice (e.g., Curve, where it has been the source of a number of vulnerabilities [24,53]). One natural question to ask is: how many time steps n does an informed trader need to run this algorithm to compute the optimal trade quantities up to some prescribed error? It turns out that this is directly related to the curvature of the CFMM. Suppose that this algorithm is run for a maximum of T > 0 time steps. Then we can write a recurrence for the expected reserve quantities as a function of time: E[R(t + 1)|R(t)] = R(t) + α t ∆(t) T + (1 − α t )∆(t) (24) E[R (t + 1)|R (t)] = R (t) + α t ∆ (t) T + (1 − α t )∆ (t) where the∆ are the quantities that would need to trade if an oracle provided the optimal quantities to move from m(t) to m(t + 1). In particular, the quantities∆,∆ are the noarbitrage quantities that are traded when a roundtrip trade is made by an arbitrageur and the LP books a profit (akin to eq. (7)). If the informed trader has a very high amount of edge or information (e.g. lim inf t→∞ α t = 1), then (24) is completely dominated by the GDA time steps of the informed trader. From [38, Theorem 1], we find the result that when the GDA steps dominate the evolution of (24), then if T = Ω µ 2 κ , the expected reserves will be very close to the optimal reserves ∆ * , ∆ * , e.g. ∀ > 0, ∃t ( ) ∈ O(T ) such that |∆(t) T +t ( ) − ∆ * | < In particular, [38] uses effectively the same definition of curvature for solving minimax problems using GDA. We note that their results depend weakly on dimension, so that this recursion can be extended to informed traders interacting with n-asset CFMM markets. This illustrates that the curvature of a CFMM also controls the computational complexity that an informed trader needs to act on perfect information (e.g., α t ≈ 1) and trade with a CFMM. This formulation was inspired by constructions in robust machine learning that resemble adversarial information aggregation in a CFMM. We conjecture that this provides a way to extend some of our results to higher dimensions. H Optimal Balancer yield farming Suppose that we have two Balancer pools consisting of two assets that have the same spot price p at time t . Pool i for i ∈ {1, 2} has reserves R i , R i ∈ R + and exponent τ i ∈ (0, 1) so that the trading function for the ith pool is By definition, the spot price equivalence means that: Let p i (t) be the price of the ith pool at time t so that p 1 (t ) = p 2 (t ) = p. Assume that a liquidity provider owns the same fraction b ∈ [0, 1] of each pool and that at time t + 1 a trade of size ∆ from x to y occurs. In the feeless case, this gives a quantity change∆ i to each pool, where∆ i is implicitly specified via the CFMM constraints Let P V (R i , R i , t) be the portfolio value of pool i at time t in units of x. 
Then we have, This implies that if R 1 = R 2 , then τ 1 > τ 2 implies P V (R 1 , R 1 , t + 1) < P V (R 2 , rb 2 , t + 1). Less formally, this says that losses to portfolio value, given an equal reserve of x are higher for less sharply curved Balancer pool (e.g., τ 1 loses more than τ 2 ). If we wanted to incentivize liquidity in pool τ 1 by issuing new x to liquidity providers, how much would we have to pay? This value is exactly equal to P V (R 2 , R 2 , t + 1) − P V (R 1 , R 1 , t + 1), which represents the 'excess loss' covered by printing token α (we're assuming that's the portfolio numéraire). Using this equation we have: Thus, if δ(t +1) is paid out pro-rata to holders via a Synthetix-like yield farming mechanism, we can encourage users to stay in the worse pool. This is an improved formula over those used by pro-active market makers, such as Dodo [9], which don't try to perform this accounting on a trade-by-trade basis and instead rely on an oracle. You can effectively force the user to lock liquidity into pool 1 from height t to t + k and then pay out k i=0 δ(t + i) upon redemption. I Conjecture: yield farming is superhedging Before making the comparison to superhedging in discrete time [14, §3.4], we will demonstrate that equations (13) and (14) can be recast as an optimization problem. Recall that for a function f , g is a subgradient of f at x, e.g., g ∈ ∂f (x) if ∀y ∈ dom(f ), f (y) − f (x) ≥ g T (y − x). Let P ϕ (R, c) be the portfolio value for an LP at price p, reserves R for trading function ϕ. Equation 13 can be restated in terms of portfolio value as, Let f (p) = P ϕ (R, p). Then this condition is equivalent to − µ κ ∈ ∂f (m e 0 ). Similarly, equation (14) corresponds to showing any admissible subsidy R must satisfy 0 ∈ R + ∂f (m e 0 ). This connects the subsidy to an optimization problem, as optima for a function f are found by showing that 0 ∈ ∂f [14, §1.3]. These subgradient conditions illustrate that there is a connection between the yield farming subsidy and optimizing changes in portfolio value. Since the change in portfolio value is equivalent to solving a dual optimization problem [4], this shows that yield farming subsidies are equivalent to bounding the payoff an LP engenders under an arbitrage trade by a simpler curvature dependent payoff. In mathematical finance, bounding complex payoffs with simpler payoffs is known as superhedging [14, §3.4]. Suppose, instead, that we start with the opposite problem of trying to find a CFMM trading function Ψ that is equivalent to given CFMM trading function ϕ and a subsidy R . Another way to arrive at equation (13) is to consider the set of CFMMs that can have a valid portfolio value at (R + (R , 0), p) and find an upper bound for this portfolio value. More formally, suppose that we define the following set of trading functions: P(R, R , R , m , µ, κ) = {ϕ : (R + R , R , m ) ∈ dom(P ϕ ), ϕ is µ-stable and κ-liquid} We can consider P(R, R , R , m , µ, κ) to be a subset of the set of closed, proper, l.s.c. convex functions [4]. If we can find a quasiconvex function ψ such that ψ = sup P(R, R , R , m , µ, κ) (29) then ψ must satisfy (13) as per the discussion in §3.3. With some slight modifications, trading functions can be put in bijective correspondence with portfolio values, which are contingent claims. Finding a contingent claim whose payoff is the supremum over a set of admissible contingent claims is known as a superhedge [14, §3.4], [17]. 
Both [14,17] map superhedging to a convex dual problem analogous to finding portfolio value that is dual to (29). These formulations are very similar to the relationship between trading function and portfolio value from [4]. While traditional superhedging involves taking a supremum over a set of equivalent martingale measures, we are instead taking a supremum over a set of contingent claims whose curvatures are constrained. There is some literature on superhedging over sets of contingent claim payoffs that are not equivalent martingale measures (e.g., [49]), however, none of the dual frameworks known to the authors map cleanly to our definitions of curvature. We conjecture a curvature claim result analogous to [17,Prop. 2.3] exists.
Cost-effectiveness of exercise referral schemes enhanced by self-management strategies to battle sedentary behaviour in older adults: protocol for an economic evaluation alongside the SITLESS three-armed pragmatic randomised controlled trial Introduction Promoting physical activity (PA) and reducing sedentary behaviour (SB) may exert beneficial effects on the older adult population, improving behavioural, functional, health and psychosocial outcomes in addition to reducing health, social care and personal costs. This paper describes the planned economic evaluation of SITLESS, a multicountry three-armed pragmatic randomised controlled trial (RCT) which aims to assess the short-term and long-term effectiveness and cost-effectiveness of a complex intervention on SB and PA in community-dwelling older adults, based on exercise referral schemes enhanced by a group intervention providing self-management strategies to encourage lifestyle change. Methods and analysis A within-trial economic evaluation and long-term model from both a National Health Service/personal social services perspective and a broader societal perspective will be undertaken alongside the SITLESS multinational RCT. Healthcare costs (hospitalisations, accident and emergency visits, appointment with health professionals) and social care costs (eg, community care) will be included in the economic evaluation. For the cost-utility analysis, quality-adjusted life-years will be measured using the EQ-5D-5L and capability well-being measured using the ICEpop CAPability measure for Older people (ICECAP-O) questionnaire. Other effectiveness outcomes (health related, behavioural, functional) will be incorporated into a cost-effectiveness analysis and cost-consequence analysis. The multinational nature of this RCT implies a hierarchical structure of the data and unobserved heterogeneity between clusters that needs to be adequately modelled with appropriate statistical and econometric techniques. In addition, a long-term population health economic model will be developed and will synthesise and extrapolate within-trial data with additional data extracted from the literature linking PA and SB outcomes with longer term health states. Methods guidance for population health economic evaluation will be adopted including the use of a long-time horizon, 1.5% discount rate for costs and benefits, cost consequence analysis framework and a multisector perspective. Ethics and dissemination The study design was approved by the ethics and research committee of each intervention site: the Ethics and Research Committee of Ramon Llull University (reference number: 1314001P) (Fundació Blanquerna, Spain), the Regional Committees on Health Research Ethics for Southern Denmark (reference number: S-20150186) (University of Southern Denmark, Denmark), Office for Research Ethics Committees in Northern Ireland (ORECNI reference number: 16/NI/0185) (Queen’s University of Belfast) and the Ethical Review Board of Ulm University (reference number: 354/15) (Ulm, Germany). Participation is voluntary and all participants will be asked to sign informed consent before the start of the study. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement number 634 270. This article reflects only the authors' view and the Commission is not responsible for any use that may be made of the information it contains. 
The findings of the study will be disseminated to different target groups (academia, policymakers, end users) through different means following the national ethical guidelines and the dissemination regulation of the Horizon 2020 funding agency. Use of the EuroQol was registered with the EuroQol Group in 2016. Use of the ICECAP-O was registered with the University of Birmingham in March 2017. Trial registration number NCT02629666; Pre-results.

Strengths and limitations of this study
• First economic evaluation of a complex public health intervention to improve health and capability outcomes of community-dwelling, insufficiently active older adults.
• Economic evaluation in a multicountry setting will require appropriate sensitivity analyses of the results to the costing methodology and to the econometric approach used to deal with cross-country heterogeneity.
• The protocol will provide useful guidance for designing the economic evaluation of a complex public health intervention in multicountry settings.
• Economic evaluation will be reported incorporating a broad set of preference-based health and capability outcomes as well as effectiveness outcomes using cost-utility, cost-effectiveness and cost-consequence analysis.
• While considering SB alongside PA represents a strength over the existing literature, long-term modelling will need to rely on assumptions to combine PA and SB, and validation of this may not be possible until further evidence emerges.

Economics of inactivity and sedentary behaviour
An insufficient level of physical activity and prolonged sedentary behaviour (PA and SB, respectively, henceforth) are associated with an increased risk of developing major diseases (e.g. breast and colon cancer, type II diabetes, obesity and depression). In particular, in the last decade, growing evidence indicates that excessive sitting time may be harmful to health, independent of meeting the recommended physical activity guidelines.

PA and SB represent large costs to the healthcare system and society more broadly. In England, the cost of physical inactivity among the general population (direct costs related to chronic diseases and indirect costs related to the loss of productivity associated with mood and anxiety disorders) has been estimated to be 8.3 billion pounds per year [1], whereas in Europe that estimate equated to 80.4 billion euros in 2012 (6.2% of total healthcare expenditure across the EU-28). In this regard, reducing inactivity by 20% among the adult population would result in a cost saving of 16.1 billion euros [2]. The burden of an inactive lifestyle is predicted to increase for older adults, who represent the fastest growing segment of the world population [3] and account for 30-40% of total healthcare spending across Europe [4].
The increase in the percentage of the total population who are older adults will be accompanied by an increase in the incidence of diseases associated with old age such as cardiovascular disease, cancer, type 2 diabetes, accidental falls, obesity, metabolic syndrome, mental disorders, and musculoskeletal diseases [5]. Furthermore, the frailty associated with old age constitutes an additional risk factor for adverse health outcomes (falls, hospitalisation, disability and death) [6]. Maintaining or engaging in a physically active lifestyle and reducing SB may result in attenuating cognitive and functional decline over time, alleviating the symptoms of various chronic conditions associated with old age [7] and preventing or even reversing frailty [8 9].
The substantial economic impact of an inactive lifestyle raises the need for a health economic evaluation investigating the cost-effectiveness of interventions that promote active lifestyles, in order to reduce the likelihood of developing, and to prevent, the diseases and disability associated with old age.

Interventions to reduce SB or a lack of PA: economic evaluation evidence
Evidence regarding the cost-effectiveness of public health interventions directed towards the increase of PA and reduction of SB is typically characterised by substantial heterogeneity regarding the type of implemented intervention and the target population. Garrett et al. [11] reported the results of a systematic review of community-based interventions directed towards the improvement of PA, finding that most interventions, especially those not requiring direct supervision, were cost-effective [23]. Pavey et al. [12] found that ERS interventions were cost-effective only in inactive but healthy populations [10]. de Vries et al. [22] evaluated the cost-effectiveness of a patient-centred physical therapy strategy with tailored motivational and coaching sessions and physical training directed towards individuals over 70 years old with mobility problems; they found that the intervention was effective in increasing PA and reducing frailty and provided good value for money [20].
Poor adherence and lack of long-term commitment have been identified as the main challenges of ERS interventions, thus suggesting scope for behavioural interventions [10 12]. However, only a few studies evaluate such interventions [14 19]. Furthermore, there is a lack of evidence regarding the long-term effectiveness and cost-effectiveness of interventions to increase PA and reduce SB [12].

The SITLESS Intervention
The SITLESS study is a multinational, multicentre, three-armed randomised controlled trial (RCT) investigating the short- and long-term effectiveness and cost-effectiveness of a complex intervention to increase PA and reduce SB in older adults from four European countries. The cost-effectiveness of a joint intervention of an Exercise Referral Scheme (ERS) and Self-Management Strategies (SMS) will be evaluated compared with two alternatives: ERS alone and usual care (UC, henceforth). Full details of the RCT protocol are reported elsewhere [26].
ERS have become one of the most widely used instruments to promote PA [12 27]. In an ERS intervention, individuals (usually insufficiently active or affected by specific diseases which might benefit from PA) are assigned to a primary care or exercise facility, which designs and monitors a tailored exercise programme. However, ERS are not usually focused on reducing SB [18 28-30], and evidence of ERS effectiveness relates to the short term and to specific subgroups of individuals (e.g. overweight adults, or individuals who are already slightly active [31]) and hence is not generalisable to the older population of interest in the SITLESS trial. Furthermore, evidence regarding the effectiveness and cost-effectiveness of ERS compared with alternative interventions (e.g. standard advice) is limited [27 32].
Individual commitment towards PA is driven by behavioural, demographic and socio-economic (possibly country-specific) factors. Given this, the behavioural intervention in the form of SMS is anticipated to modify individual behaviour more effectively than ERS or usual care. Furthermore, SMS might exert an incremental benefit, in terms of increased PA and reduction of SB, with respect to ERS alone, by enhancing motivation to sustain the behaviour change over the long term, thus overcoming problems related to the limited uptake and low adherence usually associated with ERS [16 17].
The SITLESS RCT enhances the PA intervention with an SMS intervention based on behavioural change techniques, encompassing a range of components: behavioural goal setting, self-monitoring of progress and social support among peers and the existing network, external monitoring, problem solving, and environmental signposting. The SMS intervention targets physical activity and sedentary behaviour with distinct, though related, techniques [26].
This paper describes the protocol for the economic evaluation alongside the SITLESS RCT. The aim is to determine whether enhancing ERS with SMS is a cost-effective strategy and provides good value for money. In addition, this economic evaluation protocol will outline the additional challenges posed by the multicountry nature of the study, describing the proposed methodologies to deal with the identification, measurement and valuation of costs and outcomes. The health economics logic model (Appendix 1) illustrates the linkage between the resources used and the outcomes of interest related to the SITLESS intervention.

Following good practice for the design of economic evaluations alongside RCTs [33], data collection instruments were designed in collaboration with the trial team to collect information on the cost of the ERS and SMS interventions, resources used by patients (e.g. usage of medical, social and community services) and preference-based QOL and capability outcomes, at baseline and over the trial follow-up (12 and 18 months post intervention), considering a health and social service perspective and a broader societal perspective. While the SITLESS complex intervention is standardised, the data collection instruments were tailored to each country context (e.g. inclusion of country-specific examples of community/social services).
The economic evaluation will follow the most recent NICE guidance for the economic evaluation of public health interventions [34], as well as the CHEERS guidelines for reporting results [35].

Study population
The target population is community-dwelling older adults who fulfil the following criteria: aged 65 or above; able to walk independently for at least 2 minutes; have no major physical limitations (i.e. not scoring <4 on the Short Physical Performance Battery); and are insufficiently active (i.e. do not perform regular physical activity for at least 30 minutes on five or more days of the week). All individuals will be recruited according to country-specific primary prevention pathways. Overall, according to the sample size estimation, 1,338 individuals will be recruited for the trial (446 per group).

Setting and location
The SITLESS trial is a multicountry, multicentre trial. The intervention will be delivered in primary care or community settings in four sites: Barcelona (Spain), Odense (Denmark), Ulm (Germany) and Belfast (UK).
A multicountry RCT has benefits in terms of higher statistical power and generalisability of the economic results [36 37]. However, the multinational nature of SITLESS introduces substantial cross-country heterogeneity in terms of: demographic structure (i.e. morbidity and mortality patterns, ageing structure); differences in healthcare systems (e.g. payment systems, health provider incentives); differing unit cost sources; differing availability of healthcare services and clinical practices [38]; individual attitudes towards PA (personal motivation, health and mobility issues, genetic factors); and social and physical environments or cultural differences in behaviour and preferences of participants (e.g. local opportunities for PA, social gatherings, etc.).
For all the reasons mentioned above, identifying and accounting for cross-country heterogeneity is a key issue for the economic evaluation of SITLESS.
The cost-effectiveness of the joint ERS and SMS intervention will be compared over 18 months to two control groups: 1) ERS alone; and 2) usual care (i.e. a written general booklet standardised across sites, including the WHO recommendations on regular PA practice for health, and two sessions on healthy ageing covering fall prevention and healthy nutrition).

Study perspective

The economic evaluation will be conducted from a health service and personal social services perspective, as recommended by the UK NICE guidelines [34]. To this end, health and non-health care costs (and cost savings) incurred by both the provider and the participant will be considered. In line with NICE guidance, which suggests emphasising overall welfare rather than health per se, a cost-effectiveness analysis from a personal social services perspective and a cost-consequence analysis adopting a broader societal perspective may be performed as well.

Time horizon

The primary within-trial economic analysis will be conducted at baseline, post-intervention and at the 12- and 18-month follow-ups. The second part of the economic analysis will extrapolate the cost-effectiveness results beyond the 18-month follow-up of the clinical trial, in order to explore the potential lifetime cost-effectiveness of the SITLESS intervention.

Discount Rate

Following the NICE public health economic evaluation guidelines [34], a discount rate of 1.5% will be employed, and sensitivity analysis will explore the impact of rates of 3.5% and 6%.

Measures of Outcome

The outcome measures employed in the economic evaluation and the timing of their collection are presented in Table 1. While QALYs are the main outcome measure to be used in the cost-utility analysis (CUA) framework, the cost-effectiveness analysis (CEA) and cost-consequence analysis (CCA) will make use of the broader set of outcomes collected within the trial. QALYs will be estimated using the EQ-5D-5L [39] and capability wellbeing using the ICECAP-O [40]. Outcomes will be assessed at baseline, month 4 (end of the ERS intervention), month 16 (12 months post-intervention) and month 22 (18 months post-intervention). The EQ-5D-5L focuses on health attributes, while the ICECAP-O instrument assesses capability wellbeing (according to Sen's capability theory [41]), thus incorporating both health and non-health dimensions [42].

EQ-5D-5L

The EQ-5D-5L questionnaire measures health-related quality of life (HRQOL) across five dimensions (mobility, self-care, usual activities, pain/discomfort, anxiety/depression), each rated on a 1-5 scale. It also includes a visual analogue scale on which patients rate their own health between 0 (worst imaginable health state) and 100 (best imaginable health state). By assigning weights to the responses on the five dimensions, a summary index of health-related quality of life can be generated at the individual level. The EQ-5D has been used as a measure of HRQOL by several studies examining the cost-effectiveness of ERS [15, 17, 43].

EQ-5D utility scores will be derived using UK tariffs. Given the multinational nature of the analysis, the quality-adjustment weights for each health state should ideally be obtained using country-specific EQ-5D tariffs, which reflect country-specific differences in health perceptions and preferences and might significantly affect a cost-utility analysis [44-46]. However, country-specific value sets for the EQ-5D-5L have so far not been directly elicited with a validated procedure in any of the SITLESS countries (a value set is available for England only); in line with a recent NICE position statement [47], we will therefore make use of the "crosswalk" procedure developed by the EuroQol Group to link the EQ-5D-5L and the EQ-5D-3L. Crosswalk value sets for the EQ-5D-5L are currently available for all the countries participating in the SITLESS study [48]. The utility values derived from the EQ-5D-5L questionnaire will be used to derive QALYs using standard area-under-the-curve (AUC) methods, adjusted where appropriate for group-specific differences in baseline utility [49].
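To make the outcome valuation concrete, the hedged sketch below shows one way per-participant QALYs could be computed from EQ-5D utility scores at the four assessment points using the AUC method, with each interval's contribution discounted at the 1.5% base-case annual rate (3.5% and 6% explored in sensitivity analysis). The utility values, the per-interval discounting shortcut and the function names are illustrative assumptions, not the trial's analysis code; baseline-utility adjustment would in practice be handled by regression.

```python
# Hedged sketch: area-under-the-curve (AUC) QALYs from EQ-5D utility scores collected at
# baseline and months 4, 16 and 22, discounted at the protocol's 1.5% base-case rate.
# All numbers below are illustrative, not trial data.

ASSESSMENT_MONTHS = [0, 4, 16, 22]   # baseline, end of ERS, 12 and 18 months post-intervention

def auc_qalys(utilities, months=ASSESSMENT_MONTHS, annual_discount=0.015):
    """Trapezoidal QALYs over follow-up, discounting each interval at its midpoint."""
    qalys = 0.0
    for t0, t1, u0, u1 in zip(months, months[1:], utilities, utilities[1:]):
        y0, y1 = t0 / 12.0, t1 / 12.0
        interval = 0.5 * (u0 + u1) * (y1 - y0)            # trapezoid area for this interval
        midpoint = 0.5 * (y0 + y1)
        qalys += interval / (1 + annual_discount) ** midpoint
    return qalys

example_utilities = [0.71, 0.78, 0.80, 0.76]              # e.g. crosswalk-derived index values
for rate in (0.015, 0.035, 0.06):                          # base case and sensitivity rates
    print(f"discount {rate:.1%}: {auc_qalys(example_utilities, annual_discount=rate):.3f} QALYs")
```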
ICECAP-O

The ICECAP-O instrument measures capability wellbeing using five questions, each capturing one capability dimension (attachment, security, role, enjoyment, control), rated on a 1-4 scale. Regarding availability for the SITLESS study, English, Spanish and German translations already existed, while the questionnaire was translated into Danish for the first time for the purposes of SITLESS. Given that country-specific tariffs are not available for all the countries in the SITLESS trial, the ICECAP-O utility scores will be derived using UK tariffs.

Measures of Resource Use and Cost

The costs of delivering and administering the SITLESS intervention and the control conditions will be identified and measured alongside potential cost impacts, thus taking into account both costs incurred and cost savings arising across arms. In line with the NHS and personal social services perspective, two sources of costing have been taken into account.

First, the costs borne by the primary care/exercise facility to deliver the SITLESS intervention and the control conditions are considered. The SMS intervention is tailored to each individual, thus requiring the collection of individual-specific costs (e.g. duration of contact, staff present, transport costs incurred by staff and participants) using the SMS Intervention Cost Log. Average costs that are not likely to vary across individuals (e.g. equipment used during the SMS sessions; refreshments) will be captured as well. Unlike the SMS, the delivery of the ERS and UC interventions is entirely standardised; the associated cost can therefore be regarded as uniform rather than individual-specific, and the corresponding data collection instruments (the ERS and UC Cost Logs) aim to capture the average cost incurred when delivering these interventions. A sample of the data collection instruments can be provided upon request.

Second, the cost of resource use will be estimated through a bottom-up exercise, following similar studies [12, 15], and collected through a questionnaire on the use of exercise facilities as well as health and social services facilities in the previous three months.

Evaluating costs in multinational trials requires handling a non-negligible amount of between-country heterogeneity, which needs to be addressed appropriately in order to allow comparability. Unit prices need to be converted into a common currency (Euro) using Purchasing Power Parity (PPP) statistics reported by the OECD for a base year [36]. Furthermore, following a multi-country costing approach, unit cost estimates from each country will be used to value the resources used in that country, with a sensitivity analysis using UK unit prices performed as well. Indeed, while systematic reviews provide mixed evidence on the most commonly used costing method [51-53], using country-specific unit costs is a common and recommended practice for valuing resources in multinational RCTs [54, 55]. However, recent ISPOR guidelines cast doubt on the superiority of the multi-country approach [54], arguing that multi-country costing may not be an effective strategy to adjust for cross-country heterogeneity.
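As a hedged illustration of the conversion step described above, the sketch below expresses country-specific unit costs in PPP-adjusted euros. The PPP factors and the GP-visit costs are placeholders invented for the example; the analysis itself would take them from OECD PPP statistics for the chosen base year and from the unit cost sources summarised in Appendix 2.

```python
# Hedged sketch: converting country-specific unit costs to PPP-adjusted euros.
# All factors and costs below are invented placeholders, not OECD or trial figures.

PPP_TO_EUR = {            # national currency units per PPP-euro, illustrative only
    "Spain": 0.93,
    "Denmark": 9.60,
    "Germany": 1.02,
    "UK": 0.95,
}

UNIT_COSTS_LOCAL = {      # e.g. cost of one GP visit in local currency, illustrative only
    "Spain": 38.0,
    "Denmark": 310.0,
    "Germany": 42.0,
    "UK": 36.0,
}

def to_ppp_euro(country: str, amount_local: float) -> float:
    """Express a local-currency cost in PPP-adjusted euros."""
    return amount_local / PPP_TO_EUR[country]

for country, cost in UNIT_COSTS_LOCAL.items():
    print(f"{country}: {to_ppp_euro(country, cost):6.2f} EUR (PPP-adjusted)")
```

Under the one-country sensitivity scenario, the UK unit prices would instead be applied to the resource-use counts recorded at all four sites, leaving the outcome side of the analysis unchanged.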
An overview of the resource use and cost measures to be employed in the economic evaluation is presented in Table 2, while Appendix 2 provides a summary of the unit cost sources that will be used to value resource use.

The within-trial economic analysis will establish the expected cost-effectiveness of SMS+ERS compared to ERS alone and to UC through a number of different analyses. The main within-trial analysis will be a CUA, which will calculate the incremental cost per QALY (calculated using the EQ-5D) and per unit of capability wellbeing (measured as Years of Full Capability, calculated using the ICECAP-O) of the SITLESS intervention versus both control groups. In addition, the cost per unit of increased PA or reduction in SB will be calculated within a cost-effectiveness framework. Furthermore, a CCA framework will also be implemented: given the complex nature of the SITLESS intervention, it is likely that not all of its relevant benefits will be captured by a single utility or outcome measure, and the CCA framework facilitates the presentation of the wider battery of outcomes collected within the trial. Table 3 shows the health economics frameworks (CUA, CEA or CCA) and the related outcome measures, perspectives and formats for presenting results.

Statistical analysis

The multi-country nature of the SITLESS intervention implies that cost and outcome data fall naturally into a hierarchical structure, meaning that multiple "micro-units" (individuals) are nested within multiple macro-units (countries) [41, 62]. Dealing with this hierarchical data structure will be an important consideration for the economic evaluation, and will allow appropriate modelling of within- and between-country variability as well as of the clustering effect of the intervention itself [63, 64]. However, if no significant degree of country-level clustering is found in the SITLESS data, the estimation will rely upon widely used non-hierarchical models (e.g. a pooled model with country fixed effects [58]). An exploratory analysis will reveal country patterns in costs and effectiveness, as well as highlighting the presence of outliers that, given the small number of countries, may have a strong impact on the economic results.

Addressing Uncertainty

Deterministic and stochastic sensitivity analyses will be performed to measure uncertainty around parameters considered to be key drivers of the cost-effectiveness of the SITLESS intervention. The deterministic, one-way sensitivity analysis will examine the impact that changes in the discount rate, unit costs and utility weights have on the main economic evaluation results. Specifically, we will examine the impact of the assumptions regarding resource use and outcome valuation in a multi-country setting, including: a) the multi-country versus one-country (UK unit costs) costing approach; and b) country-specific utility weights derived through the crosswalk procedure versus UK-based EQ-5D weights. When appropriate, a tornado diagram may be used to explore the effect of a percentage change in each key model parameter on the main outcome.

A two-way sensitivity analysis will explore the joint variation of cost and utility weights over the range identified by the one-country and multi-country scenarios, and will assess how the incremental cost-effectiveness ratio (ICER) changes in the extreme cases. In addition, sensitivity to the econometric specification used to model cross-country data clustering will be examined and taken into account. Further sensitivity analyses might be required depending on the distributional assumptions regarding costs and outcomes, as well as on the presence of outliers.
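The hedged sketch below illustrates the deterministic one-way step: the ICER is simply recomputed under each alternative assumption and the scenarios are ranked by how far they move the base-case result, which is the ordering a tornado diagram would display. The incremental costs and QALYs are invented placeholders, not trial estimates.

```python
# Hedged sketch: one-way deterministic sensitivity analysis of the ICER.
# All incremental costs (EUR) and QALYs below are made-up placeholders.

def icer(d_cost, d_qaly):
    """Incremental cost-effectiveness ratio (EUR per QALY)."""
    return float("inf") if d_qaly == 0 else d_cost / d_qaly

BASE = {"d_cost": 410.0, "d_qaly": 0.021}    # ERS+SMS vs ERS alone, illustrative

SCENARIOS = {
    "base case (multi-country costs, crosswalk tariffs, 1.5% discount)": BASE,
    "one-country costing (UK unit costs)": {"d_cost": 455.0, "d_qaly": 0.021},
    "UK EQ-5D value set applied to all countries": {"d_cost": 410.0, "d_qaly": 0.018},
    "discount rate 3.5%": {"d_cost": 402.0, "d_qaly": 0.020},
    "discount rate 6%": {"d_cost": 393.0, "d_qaly": 0.019},
}

base_icer = icer(**BASE)
ranked = sorted(SCENARIOS.items(), key=lambda kv: abs(icer(**kv[1]) - base_icer), reverse=True)
for label, s in ranked:                       # largest swing first, as in a tornado diagram
    print(f"{label:62s} ICER = {icer(**s):8.0f} EUR/QALY")
```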
Probabilistic sensitivity analysis (PSA) around the estimates of costs, effects and cost-effectiveness of the ERS+SMS intervention versus ERS alone and UC will also be undertaken, using a 1,000-iteration Monte Carlo simulation. PSA has the advantage of indicating the probability of a technology being cost-effective at various willingness-to-pay (WTP) thresholds: a high probability of being cost-effective should lead to a more positive outcome in a technology appraisal, whereas the opposite should apply for a low probability [60]. Using Monte Carlo simulation, a bootstrapped distribution of costs and QALYs will be generated, and incremental costs and QALYs will be shown on a cost-effectiveness plane. Cost-effectiveness acceptability curves (CEACs) will graphically represent the probability that the intervention is cost-effective compared to the controls across a range of cost-effectiveness thresholds. Representing the uncertainty in the ICER across a range of WTP values is a key issue in the economic evaluation of the SITLESS intervention, given that the willingness to pay is likely to be country-specific, reflecting each country's opportunity cost of undertaking the intervention [61].

Missing data

Following best practice [54, 62, 63], a multiple imputation procedure using chained equations (MICE) will be used to impute missing data separately for each arm of the trial, and predictive mean matching will be used to deal with the non-normality of cost and outcome data [64]. The procedure for dealing with missing data will take into account additional, SITLESS-specific reasons for missingness, related to the fact that motivations and barriers to providing information might be age-related (e.g. physical or cognitive weakness). Furthermore, an analysis of missing data by country will be performed in order to identify any country-specific pattern in the probability of missingness.
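To illustrate the imputation strategy just described, the following deliberately small, numpy-only sketch runs chained-equations imputation with predictive mean matching for one trial arm. The data, variable names, number of imputations and cycles are invented for the example; the trial analysis would rely on established MICE implementations rather than this toy code.

```python
# Hedged sketch: MICE with predictive mean matching (PMM) for one trial arm.
# Didactic, numpy-only illustration; all data below are invented.
import numpy as np

rng = np.random.default_rng(0)

def pmm_impute(y, X, missing, k=3):
    """Regress y on X over complete cases, then replace each missing y with the observed
    value of one of the k nearest complete cases in terms of predicted mean (PMM)."""
    obs = ~missing
    beta, *_ = np.linalg.lstsq(X[obs], y[obs], rcond=None)
    pred = X @ beta
    y_obs, pred_obs = y[obs], pred[obs]
    y_imp = y.copy()
    for i in np.where(missing)[0]:
        donors = np.argsort(np.abs(pred_obs - pred[i]))[:k]   # k closest complete-case donors
        y_imp[i] = y_obs[donors[rng.integers(len(donors))]]   # draw one donor at random
    return y_imp

# Illustrative per-arm data: total cost (EUR), QALYs and age, with missing values as np.nan
cost = np.array([520, 610, np.nan, 480, 700, 650, np.nan, 590, 530, 620, 575, np.nan])
qaly = np.array([0.71, 0.68, 0.74, np.nan, 0.66, 0.70, 0.73, 0.69, np.nan, 0.72, 0.70, 0.67])
age = rng.integers(65, 85, cost.size).astype(float)

def mice(cost, qaly, age, m=5, cycles=10):
    """Chained equations over (cost, qaly), each conditioned on the other variable and age."""
    n, draws = cost.size, []
    for _ in range(m):
        c = np.where(np.isnan(cost), np.nanmean(cost), cost)   # crude starting values
        q = np.where(np.isnan(qaly), np.nanmean(qaly), qaly)
        for _ in range(cycles):
            c = pmm_impute(c, np.column_stack([np.ones(n), q, age]), np.isnan(cost))
            q = pmm_impute(q, np.column_stack([np.ones(n), c, age]), np.isnan(qaly))
        draws.append((c.mean(), q.mean()))
    return draws

for mean_cost, mean_qaly in mice(cost, qaly, age):
    print(f"imputed-dataset means: cost {mean_cost:6.1f} EUR, QALYs {mean_qaly:5.3f}")
```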
ABSTRACT

Promoting physical activity (PA) and reducing sedentary behaviour (SB) may exert beneficial effects on the older adult population, improving behavioural, functional, health and psychosocial outcomes in addition to reducing health, social care and personal costs. This paper describes the planned economic evaluation of SITLESS, a multi-country, three-armed pragmatic randomised controlled trial (RCT) which aims to assess the short- and long-term effectiveness and cost-effectiveness of a complex intervention on SB and PA in community-dwelling older adults, based on exercise referral schemes (ERS) enhanced by a group intervention providing self-management strategies (SMS) to encourage lifestyle change.

Methods and analysis

A within-trial economic evaluation and a long-term model, from both a National Health Service/Personal Social Services perspective and a broader societal perspective, will be undertaken alongside the SITLESS multinational RCT. Health care costs (hospitalisations, accident and emergency visits, appointments with health professionals) and social care costs (e.g. community care) will be included in the economic evaluation. For the cost-utility analysis, quality-adjusted life years (QALYs) will be measured using the EQ-5D-5L, and capability wellbeing will be measured using the ICECAP-O questionnaire. Other effectiveness outcomes (health-related, behavioural, functional) will be incorporated into a cost-effectiveness analysis and a cost-consequence analysis. The multinational nature of this RCT implies a hierarchical data structure and unobserved heterogeneity between clusters that need to be adequately modelled with appropriate statistical and econometric techniques. In addition, a long-term population health economic model will be developed to synthesise and extrapolate within-trial data with additional data extracted from the literature linking PA and SB outcomes to longer-term health states. Methods guidance for population health economic evaluation will be adopted, including the use of a long time horizon, a 1.5% discount rate for costs and benefits, a cost-consequence analysis framework and a multi-sector perspective.

Ethics and dissemination

The study design was approved by the Ethics and Research Committee of each intervention site. Participation is voluntary, and all participants will be asked to sign informed consent before the start of the study.

Strengths and limitations of this study

• First economic evaluation of a complex public health intervention to improve health and capability outcomes of community-dwelling, insufficiently active older adults.
• The economic evaluation is conducted in a multi-country setting and hence requires appropriate sensitivity analyses of the results to the costing methodology and to the econometric approach used to deal with cross-country heterogeneity.

• The protocol will provide useful guidance for designing the economic evaluation of complex public health interventions in multi-country settings.

• The economic evaluation will be reported incorporating a broad set of preference-based health and capability outcomes as well as effectiveness outcomes, using cost-utility, cost-effectiveness and cost-consequence analysis.

• While considering SB alongside PA represents a strength over the existing literature, the long-term modelling will need to rely on assumptions to combine PA and SB, and validation of these assumptions may not be possible until further evidence emerges.

INTRODUCTION

Economics of inactivity and sedentary behaviour

An insufficient level of physical activity and prolonged sedentary behaviour (PA and SB, respectively, henceforth) are associated with an increased risk of developing major diseases (e.g. breast and colon cancer, type II diabetes, obesity and depression). In particular, over the last decade growing evidence has indicated that excessive sitting time may be harmful to health, independently of whether the recommended physical activity guidelines are met [1].

Physical inactivity and SB represent large costs to the healthcare system and to society more broadly. In England, the cost of physical inactivity among the general population (direct costs related to chronic diseases and indirect costs related to the loss of productivity associated with mood and anxiety disorders) has been estimated at £8.3 billion. Across the EU-28, physical inactivity accounted for an estimated 6.2% of total healthcare expenditure in 2012, and reducing inactivity by 20% among the adult population would result in a cost saving of 16.1 billion Euro [3].

The burden of an inactive lifestyle is predicted to increase for older adults, who represent the fastest growing segment of the world population [4] and account for 30-40% of total healthcare spending across Europe [5]. An increase in the percentage of the total population who are older adults will be accompanied by an increase in the incidence of diseases associated with old age, such as cardiovascular disease, cancer, type 2 diabetes, accidental falls, obesity, metabolic syndrome, mental disorders and musculoskeletal diseases [6]. Furthermore, the frailty associated with old age constitutes an additional risk factor for adverse health outcomes (falls, hospitalisation, disability and death) [7]. Maintaining or engaging in a physically active lifestyle and reducing SB may attenuate cognitive and functional decline over time, alleviate the symptoms of various chronic conditions associated with old age [8] and prevent or even reverse frailty [9, 10].

More broadly, an active lifestyle has the potential to increase the wellbeing of older people, in line with the concept of 'active ageing' and with the aim to "extend healthy life expectancy and quality of life for all people as they age, including those who are frail, disabled and in need of care" [11]. The substantial economic impact of an inactive lifestyle justifies the need for robust health economic evaluations reporting the cost-effectiveness of interventions that promote active lifestyles in order to reduce, or prevent, the diseases and disability associated with old age.
Patient and Public Involvement

SITLESS, as a Responsible Research and Innovation project, has created guidance for the involvement of several stakeholders in the project from the onset. They comprise older adults of both genders, representatives of older adults' associations, primary healthcare and sport professionals, policy-makers and other local stakeholders of relevance (e.g. health insurers, where relevant). Accordingly, four local advisory boards were created at the beginning of the project, one at each intervention site (Barcelona, Odense, Belfast, Ulm), and were periodically involved in the study from its onset. The development of the research question and outcome measures was shared with each advisory board and therefore informed by patients' priorities, motivations, experiences and preferences. We also conducted a literature review covering how older adults perceive physical activity and sedentary behaviour, and how sustained changes of behaviour could be achieved to enhance health.

The involvement of stakeholders as primary, secondary and tertiary end-users focused specifically on the design of the intervention. We explored the experiences, preferences and priorities of older adults regarding behaviour change through focus groups convened thanks to the older people's organisations belonging to the local advisory boards. Their contributions at each site were taken into account, and the main results were included in the intervention design. The local advisory boards also discussed, and contributed to, the challenges faced regarding recruitment, retention of participants in the study and dissemination strategies. Qualitative interviews were conducted with a purposeful sample of participants at each intervention site and from each arm of the trial to explore their perceptions of the intervention. Once the trial ends, we plan to disseminate the results at each primary care centre and at local leisure centres to end-users, health professionals and relevant stakeholders. We would also like to share our results at Citizen Science events involving participants from each site.
Discussion

Existing economic evidence largely concerns interventions implemented in a specific context, whereas, to the best of our knowledge, this is the first RCT that focuses on older adults and takes the multi-country setting into account. The multi-country nature of the study poses additional methodological challenges for the economic analysis. However, a multinational RCT has the potential to increase the generalizability of the results, thus providing policy makers with useful guidance on the value for money offered by complex interventions such as SITLESS. Furthermore, appropriate sensitivity analysis in the long-term modelling of intervention effects will provide insights into their sustainability, taking into account intervention costs and adherence for programs similar to SITLESS implemented in a 'real world' context.

In addition to dealing with the multi-country aspect of the study, the proposed economic evaluation of SITLESS accounts for several aspects related to the complexity of the intervention, including: the existence of multiple, interacting components (physical activity and behavioural components); the number and difficulty of the behaviours required of those delivering the intervention; the interdisciplinary team involved; the existence of externalities and spillovers (e.g. to family and informal carers); and the interaction between users, providers and system-wide components. Such complexity implies additional challenges for the economic evaluation, such as the need to consider a plethora of outcomes in order to reflect the multi-disciplinary nature of the intervention, and the design of data collection instruments that balance standardisation and country-tailoring.

Ethics and dissemination

The study design was approved by the Ethics and Research Committee of each intervention site, including the Ethics and Research Committee of Ramon Llull University (reference number: 1314001P; Fundació Blanquerna, Spain) and the corresponding committees in Denmark, the UK and Germany. Participation is voluntary and all participants will be asked to sign informed consent before the start of the study. The findings of the study will be disseminated to different target groups (academia, policy makers, end-users) through different means, following the national ethical guidelines and the dissemination regulations of the Horizon 2020 funding agency.
Missing data

Following best practice [59, 67, 68], a multiple imputation procedure using chained equations (MICE) will be used to impute missing data separately for each arm of the trial, and predictive mean matching will allow dealing with the non-normality of cost and outcome data [69]. The procedure to deal with missing data will take into account additional, SITLESS-specific reasons for missingness, related to the fact that motivations and barriers for providing information might be age-related (e.g. physical or cognitive weakness). Furthermore, an analysis of missing data by country will be performed in order to identify any country-specific pattern in the probability of missingness.

interventions implemented in a specific context, while, to the best of our knowledge, this is the first RCT which focuses on older adults and takes the multi-country setting into account.

The multi-country nature of the study poses additional methodological challenges for the economic analysis. However, a multinational RCT has the potential to increase the generalizability of the results, thus providing the policy maker with useful guidelines on the value for money provided by complex interventions such as SITLESS. Furthermore, an appropriate sensitivity analysis in the long-term modelling of intervention effects will provide insights on their sustainability, taking into account intervention costs and adherence to intervention programs similar to SITLESS that are implemented in a 'real world' context.

In addition to dealing with the multi-country aspect of the study, the proposed economic evaluation of SITLESS has accounted for several aspects related to the complexity of such an intervention, including: the existence of multiple, interacting components (physical activity and behavioural components); the number and difficulty of the behaviours required by those delivering the intervention; the interdisciplinary team involved; the existence of externalities and spillovers (e.g. to family and informal carers); and the interaction between users, providers and system-wide components. Such complexity does imply additional challenges for the economic evaluation, such as the need to consider a plethora of outcomes to take the multi-disciplinary aspect of the intervention into account, and the design of data collection instruments balancing standardisation and country-tailoring.

Ethics and dissemination

The study design was approved by the Ethics and Research Committee of each intervention site: the Ethics and Research Committee of Ramon Llull University (reference number: 1314001P) (Fundació Blanquerna, Spain), Germany). Participation is voluntary and all participants will be asked to sign informed consent before the start of the study.

The findings of the study will be disseminated to different target groups (academia, policy makers, end-users) through different means, following the national ethical guidelines and the dissemination regulation of the Horizon 2020 funding agency.
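Returning to the missing-data procedure set out at the start of this section, a minimal sketch of chained-equations imputation with predictive mean matching, run separately within each trial arm, is shown below. It uses the MICE utilities in statsmodels as one possible implementation; the column names, number of imputations and burn-in length are illustrative assumptions rather than the SITLESS specification.

```python
import numpy as np
import pandas as pd
from statsmodels.imputation.mice import MICEData

def impute_by_arm(df, arm_col="arm", n_imputations=5, burn_in=10):
    """Chained-equations imputation (with predictive mean matching) run separately
    within each trial arm; returns a list of completed datasets."""
    completed = []
    for _ in range(n_imputations):
        parts = []
        for _, arm_df in df.groupby(arm_col):
            imp = MICEData(arm_df.drop(columns=[arm_col]), k_pmm=20)  # PMM-based updates
            imp.update_all(burn_in)               # cycle through the chained equations
            filled = imp.data.copy()
            filled.index = arm_df.index
            filled[arm_col] = arm_df[arm_col].values
            parts.append(filled)
        completed.append(pd.concat(parts).sort_index())
    return completed

# Illustrative frame with missing costs and utilities (column names are assumptions).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "arm": np.repeat([0, 1], 50),
    "cost": np.where(rng.random(100) < 0.2, np.nan, rng.normal(1000, 200, 100)),
    "utility": np.where(rng.random(100) < 0.2, np.nan, rng.normal(0.8, 0.1, 100)),
    "age": rng.normal(70, 5, 100),
})
imputed_sets = impute_by_arm(df)
```

In a full analysis the economic model would be estimated on each completed dataset and the results combined with Rubin's rules, and country would also enter the imputation model, in line with the by-country missingness analysis described above.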
For each intervention, report mean values for the main categories of estimated costs and outcomes of interest, as well as mean differences between the comparator groups. If applicable, report incremental cost-effectiveness ratios. (Pending results)

Characterising uncertainty

20a. Single study-based economic evaluation: describe the effects of sampling uncertainty for the estimated incremental cost and incremental effectiveness parameters, together with the impact of methodological assumptions (such as discount rate, study perspective). (Pending results)

20b. Model-based economic evaluation: describe the effects on the results of uncertainty for all input parameters, and uncertainty related to the structure of the model and assumptions. (Not applicable)

Characterising heterogeneity

21. If applicable, report differences in costs, outcomes, or cost-effectiveness that can be explained by variations between subgroups of patients with different baseline characteristics or other observed variability in effects that are not reducible by more information.
2018-11-01T20:38:12.718Z
2018-10-01T00:00:00.000
{ "year": 2018, "sha1": "7813c7f090654ff168deb445f7c622c160381fdb", "oa_license": "CCBYNC", "oa_url": "https://bmjopen.bmj.com/content/bmjopen/8/10/e022266.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ce001eba29e797b08e7996d626f69a1ec4189778", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
10958925
pes2o/s2orc
v3-fos-license
shapeDTW: shape Dynamic Time Warping Dynamic Time Warping (DTW) is an algorithm to align temporal sequences with possible local non-linear distortions, and has been widely applied to audio, video and graphics data alignments. DTW is essentially a point-to-point matching method under some boundary and temporal consistency constraints. Although DTW obtains a global optimal solution, it does not necessarily achieve locally sensible matchings. Concretely, two temporal points with entirely dissimilar local structures may be matched by DTW. To address this problem, we propose an improved alignment algorithm, named shape Dynamic Time Warping (shapeDTW), which enhances DTW by taking point-wise local structural information into consideration. shapeDTW is inherently a DTW algorithm, but additionally attempts to pair locally similar structures and to avoid matching points with distinct neighborhood structures. We apply shapeDTW to align audio signal pairs having ground-truth alignments, as well as artificially simulated pairs of aligned sequences, and obtain quantitatively much lower alignment errors than DTW and its two variants. When shapeDTW is used as a distance measure in a nearest neighbor classifier (NN-shapeDTW) to classify time series, it beats DTW on 64 out of 84 UCR time series datasets, with significantly improved classification accuracies. By using a properly designed local structure descriptor, shapeDTW improves accuracies by more than 10% on 18 datasets. To the best of our knowledge, shapeDTW is the first distance measure under the nearest neighbor classifier scheme to significantly outperform DTW, which had been widely recognized as the best distance measure to date. Our code is publicly accessible at: https://github.com/jiapingz/shapeDTW. INTRODUCTION D YNAMIC time warping (DTW) is an algorithm to align temporal sequences, which has been widely used in speech recognition [29], human motion animation [15], human activity recognition [22] and time series classification [6]. DTW allows temporal sequences to be locally shifted, contracted and stretched, and under some boundary and monotonicity constraints, it searches for a global optimal alignment path. DTW is essentially a point-to-point matching algorithm, but it additionally enforces temporal consistencies among matched point pairs. If we distill the matching component from DTW, the matching is executed by checking the similarity of two points based on their Euclidean distance. Yet, matching points based solely on their coordinate values is unreliable and prone to error, therefore, DTW may generate perceptually nonsensible alignments, which wrongly pair points with distinct local structures (see Fig.1 (c)). This partially explains why the nearest neighbor classifier under the DTW distance measure is less interpretable than the shapelet classifier [35]: although DTW does achieve a global minimal score, the alignment process itself takes no local structural information into account, possibly resulting in an alignment with little semantic meaning. In this paper, we propose a novel alignment algorithm, named shape Dynamic Time Warping (shapeDTW), which enhances DTW by incorporating point-wise local structures into the matching process. As a result, we obtain perceptually interpretable alignments: similarly-shaped structures are preferentially matched based on their degree of similarity. 
We further quantitatively evaluate alignment paths against the ground-truth alignments, and shapeDTW achieves much lower alignment errors than DTW on Manuscript, June, 2016. Pipeline of shapeDTW. shapeDTW consists of two major steps: encode local structures by shape descriptors and align descriptor sequences by DTW. Concretely, we sample a subsequence from each temporal point, and further encode it by some shape descriptor. As a result, the original time series is converted into a descriptor sequence of the same length. Then we align two descriptor sequences by DTW and transfer the found warping path to the original time series. both simulated and real sequence pairs. An alignment example by shapeDTW is shown in Fig.1 (d). Point matching is a well studied problem in the computer vision community, widely known as image matching. In order to search corresponding points from two distinct images taken from the same scene, a quite naive way is to compare their pixel values. But pixel values at a point lacks spatial neighborhood context, making it less discriminative for that point; e.g., a tree leaf pixel from one image may have exactly the same RGB values as a grass pixel from the other image, but these two pixels are not corresponding pixels and should not be matched. Therefore, a routine for image matching is to describe points by their surrounding image patches, and then compare the similarities of point descriptors. Since point descriptors designed in this way encode image structures around local neighborhoods, they are more distinctive and discriminative than single pixel values. In early days, raw image patches were used as point descriptors [1], and now more powerful descriptors like SIFT [27] are widely adopted since they capture local image structures very well and are invariant to image scale and rotation. Intuitively, local neighborhood patches make points more discriminative from other points, while matching based on RGB pixel values is brittle and results in high false positives. However, the matching component in the traditional DTW bears the same weakness as image matching based on single pixel values, since similarities between temporal points are measured by their coordinates, instead of by their local neighborhoods. An analogous remedy for temporal matching hence is: first encode each temporal point by some descriptor, which captures local subsequence structural information around that point, and then match temporal points based on the similarity of their descriptors. If we further enforce temporal consistencies among matchings, then comes the algorithm proposed in the paper: shapeDTW. shapeDTW is a temporal alignment algorithm, which consists of two sequential steps: (1) represent each temporal point by some shape descriptor, which encodes structural information of local subsequences around that point; in this way, the original time series is converted into a sequence of descriptors. (2) use DTW to align two sequences of descriptors. Since the first step takes linear time while the second step is a typical DTW, which takes quadratic time, the total time complexity is quadratic, indicating that shapeDTW has the same computational complexity as DTW. 
However, compared with DTW and its variants (derivative Dynamic Time Warping (dDTW) [19] and weighted Dynamic Time Warping (wDTW) [17]), it has two clear advantages: (1) shapeDTW obtains lower alignment errors than DTW/dDTW/wDTW on both artificially simulated aligned sequence pairs and real audio signals; (2) the nearest neighbor classifier under the shapeDTW distance measure (NN-shapeDTW) significantly beats NN-DTW on 64 out of 84 UCR time series datasets [6]. NN-shapeDTW outperforms NN-dDTW/NN-wDTW significantly as well. Our shapeDTW time series alignment procedure is shown in Fig. 2. Extensive empirical experiments have shown that a nearest neighbor classifier with the DTW distance measure (NN-DTW) is the best choice to date for most time series classification problems, since no alternative distance measures outperforms DTW significantly [28], [30], [34]. However, in this paper, the proposed temporal alignment algorithm, shapeDTW, if used as a distance measure under the nearest neighbor classifier scheme, significantly beats DTW. To the best of our knowledge, shapeDTW is the first distance measure that outperforms DTW significantly. Our contributions are several fold: (1) we propose a temporal alignment algorithm, shapeDTW, which is as efficient as DTW (dDTW, wDTW) but achieves quantitatively better alignments than DTW (dDTW, wDTW); (2) Working under the nearest neighbor classifier as a distance measure to classify 84 UCR time series datasets, shapeDTW, under all tested shape descriptors, outperforms DTW significantly; (3) shapeDTW provides a quite generic alignment framework, and users can design new shape descriptors adapted to their domain data characteristics and then feed them into shapeDTW for alignments. RELATED WORK Since shapeDTW is developed for sequence alignment, here we first review research work related to sequence alignment. DTW is a typical sequence alignment algorithm, and there are many ways to improve DTW to obtain better alignments. Traditionally, we could enforce global warping path constraints to prevent pathological warpings [29], and several typical such global warping constraints include Sakoe-Chiba band and Itakura Parallelogram. Similarly, we could choose to use different step patterns in different applications: apart from the widely used step pattern -"symmet-ric1", there are other popular steps patterns like "symmetric2", "asymmetric" and "RabinerJuangStepPattern" [13]. However, how to choose an appropriate warping band constraint and a suitable step pattern depends on our prior knowledge on the application domains. There are several recent works to improve DTW alignment. In [19], to get the intuitively correct "feature to feature" alignment between two sequences, the authors introduced derivative dynamic time warping (dDTW), which computes first-order derivatives of time series sequences, and then aligns two derivative sequences by DTW. In [17], the authors developed weighted DTW (wDTW), which is a penalty-based DTW. wDTW takes the phase difference between two points into account when computing their distances. Batista et al [3] proposed a complexity-invariant distance measure, which essentially rectifies an existing distance measure (e.g., Euclidean, DTW) by multiplying a complexity correction factor. Although they achieve improved results on some datasets by rectifying the DTW measure, they do not modify the original DTW algorithm. In [23], the authors proposed to learn a distance metric, and then align temporal sequences by DTW under this new metric. 
One major drawback is the requirement of ground truth alignments for metric learning, because in reality true alignments are usually unavailable. In [5], the authors proposed to utilize time series local structure information to constrain the search of the warping path. They introduce a SIFT-like feature point detector and descriptor to detect and match salient feature points from two sequences first, and then use matched point pairs to regularize the search scope of the warping path. Their major initiative is to improve the computational efficiency of dynamic time warping by enforcing band constraints on the potential warping paths, such that they do not have to compute the full accumulative distance matrix between the two sequences. Our method is sufficiently different from theirs in following aspects: first, we have no notion of feature points, while feature points are key to their algorithm, since feature points help to regularize downstream DTW; second, our algorithm aims to achieve better alignments, while their algorithm attempts to improve the computational efficiency of the traditional DTW. In [28], the authors focus on improving the efficiency of the nearest neighbor classifier under the DTW distance measure, but they keep the traditional DTW algorithm unchanged. Our algorithm, shapeDTW, is different from the above works in that: we measure similarities between two points by computing similarities between their local neighborhoods, while all the above works compute the distance between two points based on their single-point y-values (derivatives). Since shapeDTW can be applied to classify time series (e.g., NN-shapeDTW), we review representative time series classifi-cation algorithms. In [25], the authors use the popular Bag-of-Words to represent time series instances, and then classify the representations under the nearest neighbor classifier. Concretely, it discretizes time series into local SAX [24] words, and uses the histogram of SAX words as the time series representation. In [31], the authors developed an algorithm to first extract classmembership discriminative shapelets, and then learn a decision tree classifier based on distances between shapelets and time series instances. In [33], they first represent time series using recurrent plots, and then measure the similarity between recurrence plots using Campana-Keogh (CK-1) distance (PRCD). PRCD distance is used as the distance measure under the one-nearest neighbor classifier to do classification. In [4], a bag-of-feature framework to classify time series is introduced. It uses a supervised codebook to encode time series instances, and then uses random forest classifier to classify the encoded time series. In [14], the authors first encode time series as a bag-of-patterns, and then use polynomial kernel SVM to do the classification. Zhao and Itti [37] proposed to first encode time series by the 2nd order encoding method -Fisher Vectors, and then classify encoded time series by a linear kernel SVM. In their paper, subsequences are sampled from both feature points and flat regions. shapeDTW is different from above works in that: shapeDTW is developed to align temporal sequences, but can be further applied to classify time series. However, all above works are developed to classify time series, and they are incapable to align temporal sequences at their current stages. 
Since time series classification is only one application of shapeDTW, we compare NN-shapeDTW against the above time series classification algorithms in the supplementary materials. The paper is organized as follows: the detailed algorithm for shapeDTW is introduced in Sec.3, and in Sec.4 we introduce several local shape descriptors. Then we extensively test shapeDTW for both sequence alignments and time series classification in Sec. 6, and conclusions are drawn in Sec.7. SHAPE DYNAMIC TIME WARPING In this section, we introduce a temporal alignment algorithm, shapeDTW. First we introduce DTW briefly. Dynamic Time Warping DTW is an algorithm to search for an optimal alignment between two temporal sequences. It returns a distance measure for gauging similarities between them. Sequences are allowed to have local non-linear distortions in the time dimension, and DTW handles local warpings to some extent. DTW is applicable to both univariate and multivariate time series, and here for simplicity we introduce DTW in the case of univariate time series alignment. A univariate time series T is a sequence of real values, i.e., T = (t 1 , t 2 , ..., t L ) T . Given two sequences P and Q of possible different lengths L P and L Q , namely P = (p 1 , p 2 , ..., p L P ) T and Q = (q 1 , q 2 , ..., q L Q ) T , and let D(P, Q) ∈ R L P ×L Q be an pairwise distance matrix between sequences P and Q, where D(P, Q) i,j is the distance between p i and p j . One widely used pairwise distance measure is the Euclidean distance, i.e., D(P, Q) i,j = |p i − q j |. The goal of temporal alignment between P and Q is to find two sequences of indices α and β of the same length l (l ≥ max(L P , L Q )), which match index α(i) in the time series P to index β(i) in the time series Q, such that the total cost along the matching path l i=1 D(P, Q) α(i),β(i) is minimized. The alignment path (α, β) is constrained to satisfies boundary, monotonicity and continuity conditions [12], [20], [32]: (1) Given an alignment path (α, β), we define two warping matrices W P ∈ {0, 1} l×L P and W Q ∈ {0, 1} l×L Q for P and Q respectively, such that W P (i, α(i)) = 1, otherwise W P (i, j) = 0, and similarly W Q (i, β(i)) = 1, otherwise W Q (i, j) = 0. Then the total cost along the matching path l i=1 D(P, Q) α(i),β(i) is equal to W P · P − W Q · Q 1 , thus searching for the optimal temporal matching can be formulated as the following optimization problem: (2) Program 2 can be solved efficiently in O(L P ×L Q ) time by a dynamic programming algorithm [10]. Various different moving patterns and temporal window constraints [32] can be enforced, but here we consider DTW without warping window constraints and taking moving patterns as in (1). shape Dynamic Time Warping DTW finds a global optimal alignment under certain constraints, but it does not necessarily achieve locally sensible matchings. Here we incorporate local shape information around each point into the dynamic programming matching process, resulting in more semantically meaningful alignment results, i.e., points with similar local shapes tend to be matched while those with dissimilar neighborhoods are unlikely to be matched. shapeDTW consists of two steps: (1) represent each temporal point by some shape descriptor; and (2) align two sequences of descriptors by DTW. We first introduce the shapeDTW alignment framework, and in the next section, we introduce several local shape descriptors. 
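The dynamic program referred to above can be written compactly. The sketch below computes the accumulated cost matrix in O(L_P x L_Q) time with the default step pattern and backtracks the warping path; it follows the formulation in the text (Euclidean point distance; boundary, monotonicity and continuity constraints) but is an illustrative re-implementation, not the authors' released code.

```python
import numpy as np

def dtw(p, q):
    """Plain DTW between two univariate sequences; returns (distance, warping path)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    Lp, Lq = len(p), len(q)
    D = np.abs(np.subtract.outer(p, q))                  # pairwise distances |p_i - q_j|
    acc = np.full((Lp + 1, Lq + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, Lp + 1):
        for j in range(1, Lq + 1):
            # "symmetric1" moves: from (i-1, j), (i, j-1) or (i-1, j-1)
            acc[i, j] = D[i - 1, j - 1] + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    # Backtrack from (Lp, Lq) to recover the alignment path (0-based indices).
    i, j, path = Lp, Lq, []
    while i > 1 or j > 1:
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j - 1), (i - 1, j), (i, j - 1)], key=lambda m: acc[m])
    path.append((0, 0))
    return acc[Lp, Lq], path[::-1]

dist, path = dtw([0, 1, 2, 3, 2, 1], [0, 0, 1, 2, 3, 1])
```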
Given a univariate time series T = (t 1 , t 2 , ..., t L ) T , T ∈ R L , shapeDTW begins by representing each temporal point t i by a shape descriptor d i ∈ R m , which encodes structural information of temporal neighborhoods around t i , in this way, the original real value sequence T = (t 1 , t 2 , ..., t L ) T is converted to a sequence of shape descriptors of the same length, i.e., d = (d 1 , d 2 , ..., d L ) T , d ∈ R L×m . shapeDTW then aligns the transformed multivariate descriptor sequences d by DTW, and at last the alignment path between descriptor sequences is transferred to the original univariate time series sequences. We give implementation details of shapeDTW: Given a univariate time series of length L, e.g.,T = (t 1 , t 2 , ..., t L ) T , we first extract a subsequence s i of length l from each temporal point t i . The subsequence s i is centered on t i , with its length l typically much smaller than L(l L). Note we have to pad both ends of T by l 2 with duplicates of t 1 (t L ) to make subsequences sampled at endpoints well defined. Now we obtain a sequence of subsequences, i.e., S = (s 1 , s 2 , ..., s L ) T , s i ∈ R l , with s i corresponding to the temporal point t i . Next, we design shape descriptors to express subsequences, under the goal that similarly-shaped subsequences have similar descriptors while differently-shaped subsequences have distinct descriptors. The shape descriptor of subsequence s i naturally encodes local structural information around the temporal point t i , and is named as shape descriptor of the temporal point Algorithm 1 shape Dynamic Time Warping Inputs: univariate time series P ∈ R L P and Q ∈ R L Q ; subsequence length l; shape descriptor function F shapeDTW: 1. Sample subsequences: S P ← P, S Q ← Q; 2. Encode subsequences by shape descriptors: Align descriptor sequences d P and d Q by DTW. Outputs: warping matrices:W P * andW Q * ; shapeDTW distance: t i as well. Designing a shape descriptor boils down to designing a mapping function F(·), which maps subsequence s i ∈ R l to shape descriptor d i ∈ R m , i.e., d i = F(s i ), so that similarity between descriptors can be measured simply with the Euclidean distance. Different mapping functions define different shape descriptors, and one straightforward mapping function is the identity function I(·), in this case, d i = I(s i ) = s i , i.e., subsequence itself acts as local shape descriptor. Given a shape descriptor computation function F(·), we convert the subsequence sequence S to a descriptor sequence T . At last, we use DTW to align two descriptor sequences and transfer the warping path to the original univariate time series. Given two univariate time series ×m be their shape descriptor sequences respectively, shapeDTW alignment is equivalent to solving the optimization problem: (3) WhereW P andW Q are warping matrices of d P and d Q , and · 1,2 is the 1 / 2 -norm of matrix, i.e., M p×n 1,2 = p i=1 M i 2 , where M i is the i th row of matrix M. Program 3 is a multivariate time series alignment problem, and can be effectively solved by dynamic programming in time O(L P ×L Q ). The key difference between DTW and shapeDTW is that: DTW measures similarities between p i and q j by their Euclidean distance |p i − q j |, while shapeDTW uses the Euclidean distance between their shape descriptors, i.e., d P i − d Q j 2 , as the similarity measure. 
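Algorithm 1 can be prototyped by reusing a DTW routine on descriptor sequences. The sketch below samples a centred, endpoint-padded subsequence at every point, applies a user-supplied descriptor function, and aligns the resulting multivariate descriptor sequences with a DTW whose local cost is the Euclidean distance between descriptors. It is a schematic illustration of the steps described above, not the authors' published implementation.

```python
import numpy as np

def subsequences(t, l):
    """Length-l subsequence centred at every point, padding both ends with duplicates."""
    t = np.asarray(t, float)
    pad = l // 2
    tp = np.concatenate([np.full(pad, t[0]), t, np.full(pad, t[-1])])
    return np.stack([tp[i:i + l] for i in range(len(t))])          # shape (L, l)

def dtw_descriptors(dp, dq):
    """DTW where the local cost is the Euclidean distance between shape descriptors."""
    Lp, Lq = len(dp), len(dq)
    acc = np.full((Lp + 1, Lq + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, Lp + 1):
        for j in range(1, Lq + 1):
            cost = np.linalg.norm(dp[i - 1] - dq[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[Lp, Lq]

def shape_dtw(p, q, l=30, descriptor=lambda s: s):
    """shapeDTW: encode each point by the descriptor of its subsequence, then run DTW."""
    dp = np.stack([descriptor(s) for s in subsequences(p, l)])
    dq = np.stack([descriptor(s) for s in subsequences(q, l)])
    return dtw_descriptors(dp, dq)

# With the identity descriptor, each point is represented by its raw subsequence.
d = shape_dtw(np.sin(np.linspace(0, 6, 80)), np.sin(np.linspace(0, 6, 95)), l=15)
```

Backtracking the warping path from the descriptor-level accumulated matrix, exactly as in plain DTW, yields the alignment that is then transferred to the original series.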
shapeDTW essentially handles local non-linear warping, since it is inherently DTW, and, on the other hand, it prefers matching points with similar neighborhood structures to points with similar values. shapeDTW algorithm is described in Algo.1. SHAPE DESCRIPTORS shapeDTW provides a generic alignment framework, and users can design shape descriptors adapted to their domain data characteristics and feed them into shapeDTW for alignments. Here we introduce several general shape descriptors, each of which maps a subsequence s i to a vector representation d i , i.e., d i = F(s i ). The length l of subsequences defines the size of neighborhoods around temporal points. When l = 1, no neighborhood information is taken into account. With increasing l, larger neighborhoods are considered, and in the extreme case when l = L (L is the length of the time series), subsequences sampled from different temporal points become the same, i.e., the whole time series, in which case, shape descriptors of different points resemble each other too much, making temporal points less identifiable by shape descriptors. In practice, l is set to some appropriate value. But in this section, we first let l be any positive integers (l ≥ 1), which does not affect the definition of shape descriptors. In Sec.6, we will experimentally explore the sensitivity of NN-shapeDTW to the choice of l. Raw-Subsequence Raw subsequence s i sampled around point t i can be directly used as the shape descriptor of t i , i.e., d i = I(s i ) = s i , where I(·) is the identity function. Although simple, it inherently captures the local subsequence shape and helps to disambiguate points with similar values but different local shapes. PAA Piecewise aggregate approximation (PAA) is introduced in [18], [36] to approximate time series. Here we use it to approximate subsequences. Given a l-dimensional subsequence s i , it is divided into m (m ≤ l) equal-lengthed intervals, the mean value of temporal points falling within each interval is calculated and a vector of these mean values gives the approximation of s i and is used as the shape descriptor d i of s i , i.e., F(·) = P AA, d i = P AA(s i ). DWT Discrete Wavelet Transform (DWT) is another widely used technique to approximate time series instances. Again, here we use DWT to approximate subsequences. Concretely, we use a Haar wavelet basis to decompose each subsequence s i into 3 levels. The detail wavelet coefficients of all three levels and the approximation coefficients of the third level are concatenated to form the approximation, which is used the shape descriptor d i of s i , i.e., F(·) = DW T, d i = DW T (s i ). Slope All the above three shape descriptors encode local shape information inherently. However, they are not invariant to y-shift, to be concrete, given two subsequences p, q of exactly the same shape, but p is a y-shifted relative to q, e.g., p = q + ∆ · 1, where ∆ is the magnitude of y-shift, then their shape descriptors under Raw-Subsequence, PAA and DWT differ approximately by ∆ as well, i.e., d(p) ≈ d(q) + ∆ · 1. Although magnitudes do help time series classification, it is also desirable that similarlyshaped subsequences have similar descriptors. Here we further exploit three shape descriptors in experiments, Slope, Derivative and HOG1D, which are invariant to y-shift. Slope is extracted as a feature and used in time series classification in [4], [8]. Here we use it to represent subsequences. 
Given a l-dimensional subsequence s i , it is divided into m (m ≤ l) equal-lengthed intervals. Within each interval, we employ the total least square (TLS) line fitting approach [11] to fit a line according to points falling within that interval. By concatenating the slopes of the fitted lines from all intervals, we obtain a m-dimensional vector representation, which is the slope representation of s i , i.e., Derivative Similar to Slope, Derivative is y-shift invariant if it is used to represent shapes. Given a subsequence s, its first-order derivative sequence is s , where s is the first order derivative according to time t. To keep consistent with derivatives used in derivative Dynamic Time Warping [19] (dDTW), we follow their formula to compute numeric derivatives. HOG1D HOG1D is introduced in [37] to represent 1D time series sequences. It inherits key concepts from the histogram of oriented gradients (HOG) descriptor [7], and uses concatenated gradient histograms to represent shapes of temporal sequences. Similarly to Slope and Derivative descriptors, HOG1D is invariant to y-shift as well. In experiments, we divide a subsequence into 2 nonoverlapping intervals, compute gradient histograms (under 8 bins) in each interval and concatenate two histograms as the HOG1D descriptor (a 16D vector) of that subsequence. We refer interested readers to [37] for computation details of HOG1D. We have to emphasize that: in [37], the authors introduce a global scaling factor σ and tune it using all training sequences; but here, we fix σ to be 0.1 in all our experiments, therefore, HOG1D computation on one subsequence takes only linear time O(l), where l is the length of that subsequence. See our published code for details. Compound shape descriptors Shape descriptors, like HOG1D, Slope and Derivative, are invariant to y-shift. However, in the application of matching two subsequences, y-magnitudes may sometimes be important cues as well, e.g., DTW relies on point-wise magnitudes for alignments. Shape descriptors, like Raw-Subsequence, PAA and DWT, encode magnitude information, thus they complement y-shift invariant descriptors. By fusing pure-shape capturing and magnitude-aware descriptors, the compound descriptor has the potential to become more discriminative of subsequences. In the experiments, we generate compound descriptors by concatenating two complementary descriptors, i.e., d = (d A , γd B ), where γ is a weighting factor to balance two simple descriptors, and d is the generated compound descriptor. ALIGNMENT QUALITY EVALUATION Here we adopt the "mean absolute deviation" measure used in the audio literature [21] to quantify the proximity between two alignment paths. "Mean absolute deviation" is defined as the mean distance between two alignment paths, which is positively proportional to the area between two paths. Intuitively, two spatially proximate paths have small between-areas, therefore low "Mean absolute deviation". Formally, given a reference sequence P, a target sequence Q and two alignment paths α, β between them, the Mean absolute deviation between α and β is calculate as: δ(α, β) = A(α, β)/L P , where A(α, β) is the area between α and β and L P is the length of the reference sequence P. Fig. 3 shows two alignment paths α, β, blue and red curves, between P and Q. A(α, β) is the area of the slashed region, and in practice, it is computed by counting the number of cells falling within it. 
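The individual shape descriptors introduced above can each be prototyped in a few lines. The sketch below follows the definitions in the text where they are explicit (interval means for PAA, a 3-level Haar decomposition for DWT, per-interval line slopes for Slope) and takes a hedged reading of HOG1D, whose exact binning scheme is not fully spelled out here; PyWavelets and ordinary rather than total least squares are assumptions made for brevity.

```python
import numpy as np
import pywt  # PyWavelets, assumed available for the Haar decomposition

def paa(s, m=5):
    """Piecewise aggregate approximation: mean of each of m (near) equal-length intervals."""
    return np.array([seg.mean() for seg in np.array_split(np.asarray(s, float), m)])

def dwt_descriptor(s, level=3):
    """3-level Haar decomposition; concatenate approximation and detail coefficients."""
    return np.concatenate(pywt.wavedec(np.asarray(s, float), "haar", level=level))

def slope_descriptor(s, m=5):
    """Slope of a line fitted within each interval (OLS here instead of total least squares)."""
    return np.array([np.polyfit(np.arange(len(seg)), seg, 1)[0]
                     for seg in np.array_split(np.asarray(s, float), m)])

def hog1d(s, n_bins=8, n_intervals=2, sigma=0.1):
    """Gradient-orientation histograms (HOG1D-style); one plausible reading of the descriptor."""
    grad = np.gradient(np.asarray(s, float))
    angle = np.arctan(sigma * grad)                       # orientation in (-pi/2, pi/2)
    mag = np.sqrt(1.0 + (sigma * grad) ** 2)              # magnitude used to weight the votes
    hists = []
    for a, w in zip(np.array_split(angle, n_intervals), np.array_split(mag, n_intervals)):
        h, _ = np.histogram(a, bins=n_bins, range=(-np.pi / 2, np.pi / 2), weights=w)
        hists.append(h)
    return np.concatenate(hists)                          # 16-D with 8 bins and 2 intervals

s = np.sin(np.linspace(0, 2 * np.pi, 30)) + 0.5
descriptors = {"PAA": paa(s), "DWT": dwt_descriptor(s),
               "Slope": slope_descriptor(s), "HOG1D": hog1d(s)}
```

A compound descriptor of the kind discussed above is then just a weighted concatenation, e.g. np.concatenate([hog1d(s), gamma * dwt_descriptor(s)]).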
Here a cell (i, j) refers to the position (i, j) in the pairwise distance matrix D(P, Q) ∈ R L P ×L Q between P and Q. P: reference sequence Q: target sequence proximity between two alignment paths Fig. 3. "Mean absolute deviation", which measures the proximity between alignment paths. The red and blue curves are two alignment paths between sequences P and Q, and "Mean absolute deviation" between these two paths is defined as: the area of the slashed region divided by the length of the reference sequence P. EXPERIMENTAL VALIDATION We test shapeDTW for sequence alignment and time series classification extensively on 84 UCR time series datasets [6] and the Bach10 dataset [9]. For sequence alignment, we compare shapeDTW against DTW and its other variants both qualitatively and quantitatively: specifically, we first visually compare alignment results returned by shapeDTW and DTW (and its variants), and then quantify their alignment path qualities on both synthetic and real data. Concretely, we simulate aligned pairs by artificially scaling and stretching original time series sequences, align those pairs by shapeDTW and DTW (and its variants), and then evaluate the alignment paths against the ground-truth alignments. We further evaluate the alignment performances of shapeDTW and DTW (and its variants) on audio signals, which have the groundtruth point-to-point alignments. For time series classification, since it is widely recognized that the nearest neighbor classifier with the distance measure DTW (NN-DTW) is very effective and is hard to beaten [2], [34], we use the nearest neighbor classifier as well to test the effectiveness of shapeDTW (NN-shapeDTW), and compare NN-shapeDTW against NN-DTW. We further compare NN-shapeDTW against six other state-of-the-art classification algorithms in the supplementary materials. Sequence alignment We evaluate sequence alignments qualitatively in Sec. 6.1.2 and quantitatively in Sec. 6.1.3 and Sec. 6.1.4. We compare shapeDTW against DTW, derivative Dynamic Time Warping (dDTW) [19] and weighted Dynamic Time Warping (wDTW) [17]. dDTW first computes derivative sequences, and then aligns them by DTW. wDTW uses a weighted 2 distance, instead of the regular 2 distance, to compute distances between points, and the weight accounts for the phase differences between points. wDTW is essentially a DTW algorithm. Here, both dDTW and wDTW are variants of the original DTW. Before the evaluation, we briefly introduce some popular step patterns in DTW. Step pattern in DTW Step pattern in DTW defines the allowed transitions between matched pairs, and the corresponding weights. In both Program. 2 (DTW) and Program. 3 (shapeDTW), we use the default step pattern, whose recursion formula is D(i, j) = d(i, j) + min{D(i − Step-pattern (a) "symmetric1" is the default step pattern for DTW and (b) gives more penalties to the diagonal directions, such that the warping favors stair-stepping paths. Step patterns (a) and (b) obtain a continuous warping path, while step patterns (c), (d) and (e) may result in skipping elements, i.e., some temporal points from one sequence are not matched to any points from the other sequence, and vice verse. Qualitative alignment assessment We plot alignment results by shapeDTW and DTW/dDTW, and evaluate them visually. shapeDTW under 5 shape descriptors, Raw-Subsequence, PAA, DWT, Derivative and HOG1D, obtains similar alignment results, here we choose Derivative as a representative to report results, with the subsequence length set to be 30. 
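The "Mean absolute deviation" defined above can be computed directly from two warping paths by comparing, for every index of the reference sequence, where the two paths place it on the target sequence. The sketch below is our own reading of the cell-counting computation, for illustration only.

```python
import numpy as np

def mean_absolute_deviation(path_a, path_b, ref_len):
    """Approximate area between two alignment paths divided by the reference length.

    Each path is a list of (ref_index, target_index) pairs. For every reference index
    we take the average target position assigned by each path; summing the absolute
    differences approximates counting the cells enclosed between the two paths."""
    def mapping(path):
        m = {}
        for i, j in path:
            m.setdefault(i, []).append(j)
        return {i: float(np.mean(js)) for i, js in m.items()}
    ma, mb = mapping(path_a), mapping(path_b)
    area = sum(abs(ma[i] - mb[i]) for i in range(ref_len))
    return area / ref_len

ground_truth = [(i, i) for i in range(10)]            # identity alignment
estimated = [(i, min(i + 2, 9)) for i in range(10)]   # a shifted alignment, for illustration
print(mean_absolute_deviation(ground_truth, estimated, ref_len=10))  # ~1.7
```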
Here, shapeDTW, DTW and dDTW all use step pattern (a) in Fig. 4. Time series with rich local features: time series with rich local features, such as those in the "OSUleaf" dataset (bottom row in Fig.5), have many bumps and valleys; DTW becomes quite brittle to align such sequences, since it matches two points based on their single-point y-magnitudes. Because single magnitude value does not incorporate local neighborhood information, it is hard for DTW to discriminate a peak point p from a valley point v with the same magnitude, although p and v have dramatically different local shapes. dDTW bears similar weakness as DTW, since it matches points bases on their derivative differences and does not take local neighborhood into consideration either. On the contrary, shapeDTW distinguishes peaks from valleys easily by their highly different local shape descriptors. Since shapeDTW takes both non-linear warping and local shapes into account, it gives more perceptually interpretable and semantically sensible alignments than DTW (dDTW). Some typical alignment results of time series from feature rich datasets "OSUleaf" and "Fish" are shown in Fig.5. Simulated sequence-pair alignment We simulate aligned sequence pairs by scaling and stretching original time series. Then we run shapeDTW and DTW (and its variants) to align the simulated pairs, and compare their alignment paths against the ground-truth. In this section, shapeDTW is run under the fixed settings: (1) fix the subsequence length to be 30, (2) use Derivative as the shape descriptor and (3) use "symmetric1" as the step-pattern. One caveat we have to pay attention to is that: scaling an input time series by a random scale vector can make the resulting time series perceptually quite different from the original one, such that simulated alignment pairs make little sense. Therefore, in practice, a scale vector S should be smooth, i.e., adjacent elements in S cannot be random, instead, they should be similar in magnitude, making adjacent temporal points from the original time series be scaled by a similar amount. In experiments, we first use a random process, which is similar to Brownian motion, to initialize scale vectors, and then recursively smooth it. The scale vector generation algorithm is shown in Alg. 2. As seen, adjacent scales are initialized to be differed by at most 1 (i.e., s(t + 1) = s(t) + sin (π × randn)), such that the first order derivatives are bounded and initialized scale vectors do not change abruptly. Initialized scale vectors usually have local bumps, and we further recursively utilize cumulative summation and sinesquashing, as described in the algorithm, to smooth the scale vectors. Finally, the smoothed scale vectors are linearly squashed into a positive range [a b]. After non-uniformly scaling an input time series by a scale vector, we obtain a scale-transformed new sequence, and then we randomly pick α percent of points from the new sequence and stretch each of them by some random amount τ . Stretching at point p by some amount τ is to duplicate p by τ times. Aligned-pairs simulation : using training data from each UCR dataset as the original time series, we simulate their alignment pairs by running Alg. 2. Since there are 27,136 training time series instances from 84 UCR datasets, we simulate 27,136 aligned-pairs Alignment quality comparison between shapeDTW and DTW/dDTW/wDTW, under the step pattern "symmetric1". As seen, as the stretching amount increases, the alignment qualities of both shapeDTW and DTW/dDTW/wDTW drop. 
However, shapeDTW consistently achieves lower alignment errors under different stretching amounts, compared with DTW, dDTW and wDTW. parameter-free, but wDTW has one tuning parameter g (see Eq. (3) in their paper), which controls the curvature of the logistic weight function. However in the case of aligning two sequences, g is impossible to be tuned and should be pre-defined by experiences. Here we fix g to be 0.1, which is the approximate mean value of the optimal g in the original paper. For the purpose of comparing the alignment qualities of different algorithms, we use the default step pattern, (a) in Fig. 4, for both shapeDTW and DTW/dDTW/wDTW, but we further evaluate effects of different step-patterns in the following experiments. We simulate alignment pairs by stretching raw time series by different amounts, 10%, 20%, 30%, 40% and 50%, and report the alignment qualities of shapeDTW and DTW/dDTW/wDTW under each stretching amount in terms of the mean of "Mean Absolute Deivation" scores over 27,136 simulated pairs. The results are shown in Fig. 7, which shows shapeDTW achieves lower alignment errors than DTW / dDTW / wDTW over different stretching amounts consistently. shapeDTW almost halves the alignment errors achieved by dDTW, although dDTW already outperforms its two competitors, DTW and wDTW, by a large margin. Effects of different step patterns : choosing a suitable step pattern is a traditionally way to improve sequence alignments, and it usually needs domain knowledge to make the right choice. Here, instead of choosing an optimal step pattern, we run DTW/dDTW/wDTW under all 5 step patterns in Fig. 4 and compare their alignment performances against shapeDTW. Similar as the above experiments, we simulate aligned-pairs under different amounts of stretches, report alignment errors under different step patterns in terms of the mean of "Mean Absolute Deivation" scores over 27,136 simulated pairs, and plot the results in Fig. 8. As seen, different step patterns obtain different alignment qualities, and in our case, step patterns, "symmetric1" and "asymmetric", have similar alignment performances and they reach lower alignment errors than the other 3 step patterns. However, shapeDTW still wins DTW/dDTW/wDTW (under "symmetric1" and "asymmetric" step-patterns) by some margin. From the above simulation experiments, we observe dDTW (under the step patterns "symmetric1" and "asymmetric") has the closest performance as shapeDTW. Here we simulate aligned-pairs with on average 30% stretches, run dDTW (under "symmetric1" step pattern) and shapeDTW alignments, and report the "Mean Absolute Deviation" scores in Table 1. shapeDTW has lower "Mean Absolute Deivation" scores on 56 datasets, and the mean of "Mean Absolute Deivation" on 84 datasets of shapeDTW and dDTW are 1.68/2.75 respectively, indicating shapeDTW achieves much lower alignment errors. This shows a clear superiority of shapeDTW to dDTW for sequence alignment. The key difference between shapeDTW and DTW/dDTW/wDTW is that whether neighborhood is taken into account when measuring similarities between two points. We demonstrate that taking local neighborhood information into account (shapeDTW) does benefit the alignment. Notes: before running shapeDTW and DTW variants alignment, two sequences in a simulated pair are z-normalized in advance; when computing "Mean Absolute Deviation", we choose the original time series as the reference sequence, i.e., divide the area between two alignment paths by the length of the original time series. Fig. 
8. Align sequences under different step patterns. We align sequence-pairs by DTW/dDTW/wDTW under 5 different step patterns (Fig. 4), "symmetric1", "symmetric2", "symmetric5", "asymmetric" and "rabinerJuang", and compare their alignment errors against those obtained by shapeDTW. As seen, different step patterns usually reach different alignment results, which shows the importance of choosing an appropriate step pattern adapted to the application domain. In our case, "asymmetric" step pattern achieves slightly lower errors than "symmetric1" step pattern (under DTW, wDTW and dDTW), however, shapeDTW consistently wins DTW/dDTW/wDTW under the best step pattern -"asymmetric". TABLE 1 Alignment errors of shapeDTW vs dDTW. We use training data from each UCR dataset as the original time series, and simulate alignment pairs by scaling and streching the original time series (stretched by 30%). Then we run shapeDTW and dDTW to align these synthesized alignment pairs, and evaluate the alignment paths against the ground-truth by computing "Mean Absolute Deviation" scores. The mean and standard deviation of the "Mean Absolute Deviation" scores on each dataset is documented, with smaller means and stds in bold font. shapeDTW achieves lower "Mean Absolute Deviation" scores than dDTW on 56 datasets, showing its clear advantage for time series alignment. the converted audio waveform from the MIDI score of the Chorale '05-DieNacht' and its corresponding 5D MFCCs features; (b) alignment paths: align two MFCCs sequences by DTW, dDTW and shapeDTW, and the plot shows their alignment paths, together with the ground-truth alignment. As seen, the alignment paths of dDTW and shapeDTW are closer to the ground-truth than that of DTW. (c) "Mean Absolute Deviation" from the ground truth alignment: on 9 (10) out of 10 chorales, shapeDTW achieves smaller alignment errors than dDTW (DTW), showing that shapeDTW outperforms DTW/dDTW to align real sequence pairs as well. MIDI-to-audio alignment We showed the superiority of shapeDTW to align synthesized alignment pairs, and in this section, we further empirically demonstrate its effectiveness to align audio signals, which have groundtruth alignments. The Bach10 dataset [9] consists of audio recordings of 10 pieces of Bach's Chorales, as well as their MIDI scores and the ground-truth alignment between the audio and the MIDI score. MIDI scores are symbolic representations of audio files, and by aligning symbolic MIDI scores with audio recordings, we can do musical information retrieval from MIDI input-data [16]. Many previous work used DTW to align MIDI to audio sequences [9], [12], [16], and they typically converted MIDI data into audios as a first step, and the problem boils down to audioto-audio alignment, which is then solved by DTW. We follow this convention to convert MIDI to audio first, but run shapeDTW instead for alignments. Each piece of music is approximately 30 seconds long, and in experiments, we segment both the audio and the converted audio from MIDI data into frames of 46ms length with a hopsize of 23ms, extract features from each 46ms frame window, and in this way, the audio is represented as a multivariate time series with the length equal to the number of frames and dimension equal to the feature dimensions. There are many potential choices of frame features, but how to select and combine features in an optimal way to improve the alignment is beyond the scope of this paper, we refer the interested readers to [12], [21]. 
Without loss of generality, we use Mel-frequency cepstral coefficients (MFCCs) as features, due to its common usage and good performance in speech recognition and musical information retrieval [26]. In our experiments, we use the first 5 MFCCs coefficients. After MIDI-to-audio conversion and MFCCs feature extraction, MIDI files and audio recordings are represented as 5dimensional multivariate time series, with approximately length L ≈ 1300. A typical audio signal, MIDI-converted audio signal, and their 5D MFCCs features are shown in Fig. 9. We align 5D MFCCs sequences by shapeDTW: although shapeDTW is designed for univariate time series alignments, it naturally extends to multivariate cases: first extract a subsequence from each temporal point, then encode subsequences by shape descriptors, and in this way, the raw multivariate time series is converted to a descriptor sequence. In the multivariate time series case, each extracted subsequence is multi-dimensional, having the same dimension as the raw time series, and to compute the shape descriptor of a multi-dimensional subsequence, we compute shape descriptors of each dimension independently, concatenate all shape descriptors, and use it as the shape representation of that subsequence. We compare alignments by shapeDTW against DTW/dDTW, and all of them use the "symmetric1" step pattern. The length of subsequences in shapeDTW is fixed to be 20 (we tried 5,10, 30 as well and achieved quite similar results), and Derivative is used as the shape descriptor. The alignment qualities in terms of "Mean Absolute Deviation" on 10 Chorales are plotted in Fig. 9. To be consistent with the convention in the audio community, we actually report the mean-delayed-second between the alignment paths and the ground-truth. The mean-delayed-second is computed as: dividing "Mean Absolute Deviation" by the sampling rate of the audio signal. shapeDTW outperforms dDTW/DTW on 9/10 MIDI-to-audio alignments. This shows taking local neighborhood information into account does benefit the alignment. shows shapeDTW under all descriptors performs significantly better than DTW. Raw-Subsequence (PAA and DWT as well) outperforms DTW on more datasets than HOG1D does, but HOG1D achieves large accuracy improvements on more datasets, concretely, HOG1D boosts accuracies by more than 10% on 18 datasets, compared with on 12 datasets by Raw-Subsequence. Time series classification We compare NN-shapeDTW with NN-DTW on 84 UCR time series datasets for classification. Since these datasets have standard partitions of training and test data, we experiment with these given partitions and report classification accuracies on the test data. In the above section, we explore the influence of different steps patterns, but here both DTW and shapeDTW use the widely adopted step pattern "symmetric1" (Fig. 4 (a)) under no temporal window constraints to align sequences. NN-DTW: each test time series is compared against the training set, and the label of the training time series with the minimal DTW distance to that test time series determines the predicted label. All training and testing time series are z-normalized in advance. shapeDTW: we test all 5 shape descriptors. We z-normalize time series in advance, sample subsequences from the time series, and compute 3 magnitude-aware shape descriptors, Raw-Subsequence, PAA and DWT, and 2 y-shift invariant shape descriptors, Slope and HOG1D. 
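The frame-level audio features used above (46 ms windows, 23 ms hop, the first five MFCCs) can be extracted with a standard audio library. The snippet below uses librosa as an illustrative choice of toolchain, not necessarily the one used in the paper, and also shows the per-dimension descriptor concatenation described for multivariate shapeDTW.

```python
import numpy as np
import librosa  # assumed audio toolchain for this sketch

def mfcc_sequence(path, n_mfcc=5, win_ms=46, hop_ms=23):
    """Represent an audio file as a multivariate time series of MFCC frames."""
    y, sr = librosa.load(path, sr=None)
    n_fft = int(sr * win_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, n_fft=n_fft, hop_length=hop)
    return m.T                                            # shape: (num_frames, n_mfcc)

def multivariate_descriptor(subseq_2d, descriptor):
    """Apply a 1-D shape descriptor to every dimension of a (l, dims) subsequence
    and concatenate the results, as described for multivariate shapeDTW."""
    return np.concatenate([descriptor(subseq_2d[:, d]) for d in range(subseq_2d.shape[1])])
```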
Parameter setting for 5 shape descriptors: (1) The length of subsequences to be sampled around temporal points is fixed to 30, as a result Raw-Subsequence descriptor is a 30D vector; (2) PAA and Slope uses 5 equal-lengthed intervals, therefore they have the dimensionality 5; (3) As mentioned, HOG1D uses 8 bins and 2 non-overlapping intervals, and the scale factor σ is fixed to be 0.1. At last HOG1D is a 16D vector representation. NN-shapeDTW: first transform each training/testing time series to a shape descriptor sequence, and in this way, original univariate time series are converted into multivariate descriptor time series. Then apply NN-DTW on the multivariate time series to predict labels. NN-shapeDTW vs. NN-DTW: we compare NN-shapeDTW, under 4 shape descriptors Raw-Subsequence, PAA, DWT and HOG1D, with NN-DTW, and plot their classification accuracies on 84 datasets in Fig.10. shapeDTW outperforms (including ties) DTW on 64/63/64/61 (Raw-Subsequence/PAA/DWT/HOG1D) datasets, and by running the Wilcoxon signed rank test between performances of NN-shapeDTW and NN-DTW, we obtain pvalues 5.5 · 10 −8 /5.1 · 10 −7 /4.8 · 10 −8 /1.7 · 10 −6 , showing that shapeDTW under all 4 descriptors performs significantly better than DTW. Compared with DTW, shapeDTW has a preceding shape descriptor extraction process, and approximately takes time O(l·L), where l and L is the length of subsequence and time series respectively. Since generally l L, the total time complexity of shapeDTW is O(L 2 ), which is the same as DTW. By trading off a slight amount of time and space, shapeDTW brings large accuracy gains. Since PAA and DWT are approximations of Raw-Subsequence, and they have similar performances as Raw-Subsequence under the nearest classifier, we choose Raw-Subsequence as a representative for following analysis. Shape descriptor Raw-Subsequence loses on 20 datasets, on 18 of which it has minor losses (< 4%), and on the other 2 datasets, "Computers" and "Synthetic-control", it loses by 10% and 6.6%. Time series instances from these 2 datasets either have high-frequency spikes or have many abrupt direction changes, making them resemble noisy signals very much. Possibly, comparing the similarity of two points using their noisy neighborhoods is not as good as using their single coordinate values (DTW), since temporal neighborhood may accumulate and magnify noise. HOG1D loses on 23 datasets, on 18 of which it has minor losses (< 5%), and on the other 5 datasets, "CBF", "Computers", "ItalyPowerDemand", "Synthetic-control" and "Wine", it loses by 7.7%, 5.6%, 5.3%, 14% and 11%. By visually inspecting, time series from "Computers", "CBF" and "Synthetic-control" are spiky and bumpy, making them highly non-smooth. This makes the first-order-derivative based descriptor HOG1D inappropriate to represent local structures. Time series instances from 'Italy-PowerDemand' have length 24, while we sample subsequences of length 30 from each point, this makes HOG1D descriptors from different local points almost the same, such that HOG1D becomes not discriminative of local structures. This makes shapeDTW inferior to DTW. Although HOG1D loses on more datasets than Raw-Subsequence, HOG1D boosts accuracies by more than 10% on 18 datasets, compared with on 12 datasets by Raw-Subsequence. On datasets "OSUleaf" and "BirdChicken", the accuracy gain is as high as 27% and 20%. 
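NN-shapeDTW as configured above is a one-nearest-neighbour rule with the shapeDTW distance on z-normalised series. A schematic classifier, reusing the shape_dtw sketch from earlier in this section, might look as follows; the helper is illustrative and ignores the speed-ups one would want in practice.

```python
import numpy as np

def znorm(x):
    x = np.asarray(x, float)
    return (x - x.mean()) / (x.std() + 1e-12)

def nn_shapedtw_predict(train_series, train_labels, test_series,
                        distance, l=30, descriptor=lambda s: s):
    """1-NN classification under a shapeDTW distance.

    `distance` is any callable with signature distance(p, q, l, descriptor),
    for example the shape_dtw sketch shown earlier in this section."""
    train_norm = [znorm(p) for p in train_series]
    preds = []
    for q in test_series:
        qn = znorm(q)
        dists = [distance(p, qn, l, descriptor) for p in train_norm]
        preds.append(train_labels[int(np.argmin(dists))])
    return preds
```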
By checking these two datasets closely, we find different classes have membership-discriminative local patterns (a.k.a shapelets [35]), however, these patterns differ only slightly among classes. Raw-Subsequence shape descriptor can not capture these minor differences well, while HOG1D is more sensitive to shape variations since it calculates derivatives. Both Raw-Subsequence and HOG1D bring significant accuracy gains, however, they boost accuracies to different extents on the same dataset. This indicates the importance of designing domain-specific shape descriptors. Nevertheless, we show that even by using simple and dataset-independent shape descriptors, we still obtain significant improvements over DTW. Classification error rates of DTW, Raw-Subsequence and HOG1D on 84 datasets are documented in Table.2. Superiority of Compound shape descriptors: as mentioned in Sec.4, a compound shape descriptor obtained by fusing two complementary descriptors may inherit benefits from both descriptors, and becomes even more discriminative of subsequences. As an example, we concatenate a y-shift invariance descriptor HOG1D and a magnitude-aware descriptor DWT using equal weights, resulting in a compound descriptor HOG1D + DWT = (HOG1D, DWT). Then we evaluate classification performances of 3 descriptors under the nearest neighbor classifier, and plot the comparisons in Fig.11. HOG1D+DWT outperforms (including ties) HOG1D / DWT on 66/51 (out of 84) datasets, and by running the Wilcoxon signed rank hypothesis test between performances of HOG1D+DWT and HOG1D (DWT), we get pvalues 5.5 · 10 −5 /0.0034, showing the compound descriptor outperforms individual descriptors significantly under the confidence level 5%. We can generate compound descriptors by weighted concatenation, with weights tuned by cross-validation on training data, but this is beyond the scope of this paper. Texas Sharpshooter plot: although NN-shapeDTW performs better than NN-DTW, knowing this is not useful unless we can tell in advance on which problems it will be more accurate, as stated in [3]. Here we use the Texas sharpshooter plot [3] to show when NN-shapeDTW has superior performance on the test set as predicted from performance on the training set, compared with NN-DTW. We run leave-one-out cross validation on training data to measure the accuracies of NN-shapeDTW and NN-DTW, and we calculate the expected gain: accuracy(NN-shapeDTW)/accuracy(NN-DTW). We then measure the actual accuracy gain using the test data. The Texas Sharpshooter plots between Raw-Subsequence/HOG1D and DTW on 84 datasets are shown in Fig.12. 87%/86% points (Raw-Subsequence/HOG1D) fall in the TP and TN regions, which means we can confidently predict that our algorithm will be superior/inferior to NNDTW. There are respectively 7/7 points falling inside the FP region for descriptors Raw-Subsequence/HOG1D, but they just represent minor losses, i.e., actual accuracy gains lie within [0.9 1.0]. Sensitivity to the size of neighborhood In the above experiments, we showed that shapeDTW outperforms DTW both qualitatively and quantitatively. But we are still left There are 87%/86% points (Raw-Subsequence/HOG1D vs. DTW) falling in the TP and TN regions, which indicates we can confidently predict that our algorithm will be superior/inferior to NNDTW. with one free-parameter: the size of neighborhood, i.e., the length of the subsequence to be sampled from each point. Let t i be some temporal point on the time series T ∈ R L , and s i be the subsequence sampled at t i . 
Texas sharpshooter plot: although NN-shapeDTW performs better than NN-DTW, knowing this is not useful unless we can tell in advance on which problems it will be more accurate, as stated in [3]. Here we use the Texas sharpshooter plot [3] to show when NN-shapeDTW has superior performance on the test set, as predicted from performance on the training set, compared with NN-DTW. We run leave-one-out cross-validation on the training data to measure the accuracies of NN-shapeDTW and NN-DTW, and we calculate the expected gain: accuracy(NN-shapeDTW)/accuracy(NN-DTW). We then measure the actual accuracy gain using the test data. The Texas sharpshooter plots between Raw-Subsequence/HOG1D and DTW on the 84 datasets are shown in Fig. 12. 87%/86% of the points (Raw-Subsequence/HOG1D) fall in the TP and TN regions, which means we can confidently predict whether our algorithm will be superior or inferior to NN-DTW. There are respectively 7/7 points falling inside the FP region for the descriptors Raw-Subsequence/HOG1D, but they represent only minor losses, i.e., their actual accuracy gains lie within [0.9, 1.0].

Sensitivity to the size of the neighborhood: in the above experiments we showed that shapeDTW outperforms DTW both qualitatively and quantitatively, but we are still left with one free parameter: the size of the neighborhood, i.e., the length of the subsequence sampled from each point. Let t_i be some temporal point on the time series T ∈ R^L, and s_i the subsequence sampled at t_i. When |s_i| = 1, shapeDTW (under the Raw-Subsequence shape descriptor) degenerates to DTW; when |s_i| = L, subsequences sampled at different points become almost identical, making points unidentifiable by their shape descriptors. This shows the importance of setting an appropriate subsequence length. However, without dataset-specific domain knowledge, it is hard to determine the length intelligently. Here, instead, we explore the sensitivity of the classification accuracies to different subsequence lengths. We conduct experiments on 42 old UCR datasets. We use Raw-Subsequence as the shape descriptor and NN-shapeDTW as the classifier. We let the length of the subsequences vary from 5 to 100 with stride 5, i.e., we repeat the classification experiments on each dataset 20 times, each time setting the length of the subsequences to 5 × i, where i is the index of the experiment (1 ≤ i ≤ 20, i ∈ Z). The test accuracies under the 20 experiments are shown by a box plot (Fig. 13). On 33 out of 42 datasets, even the worst performances of NN-shapeDTW are better than DTW, indicating that shapeDTW performs well under a wide range of neighborhood sizes.

CONCLUSION
We have proposed a new temporal sequence alignment algorithm, shapeDTW, which achieves quantitatively better alignments than DTW and its variants. shapeDTW is also a quite generic framework: users can design their own local subsequence descriptors and fit them into shapeDTW. We experimentally showed that shapeDTW under the nearest neighbor classifier obtains significantly improved classification accuracies compared with NN-DTW. Therefore, NN-shapeDTW sets a new accuracy baseline for further comparison.

ACKNOWLEDGMENTS
This work was supported by the National Science Foundation (grant number CCF-1317433), the Office of Naval Research (N00014-13-1-0563) and the Army Research Office (W911NF-11-1-0046 and W911NF-12-1-0433). The authors affirm that the views expressed herein are solely their own, and do not represent the views of the United States government or any agency thereof.

TABLE 2: Error rates of NN-DTW and NN-shapeDTW (under the descriptors Raw-Subsequence and HOG1D) on 84 UCR datasets. The error rates on datasets where NN-shapeDTW outperforms NN-DTW are highlighted in bold font. Underscored datasets are those on which shapeDTW has improved the accuracies by more than 10%.
Retraction Notice: RETRACTED: "Impacts on Global Temperature During the First Part of 2020 Due to the Reduction in Human Activities by COVID-19"

One of the major events transpiring in the 21st century is the unforeseen outbreak due to COVID-19. This pandemic directly altered human activities due to the forced confinement of millions of inhabitants over the world. It is well known that one of the main factors that affect global warming is human activities; however, during the first part of 2020, they were severely reduced by the spread of the coronavirus. This study strives to investigate the possible impact of quarantine initiation worldwide and the linked outcomes for temperatures on a global scale. To achieve this goal, the evaluation of the temperature status at the continental scale was conducted in two particular forms: (i) concerning the short term, comparing the data from 2016, 2017, 2018, and 2019; and (ii) assessing the long-term differences, comprising 30 years of data (1981–2010). The data employed in this study were obtained from the respective NASA and Copernicus databases. The temperature maps and temperature differences of the different years before the pandemic were compared with the Coronavirus onset (winter and spring) data with the aid of the Python programming language. Continental temperature mapping results showed that the temperature difference of the American continent had attained its maximum value in January 2016, and yet the temperature is observed to be warmer than in 2016. The largest difference in the short-term temperature in comparison to 2020 referred to the months when the maximum quarantine began, that is, February and March, when the temperature was cooler than in the prior years. The long-term mean study denoted that the temperatures throughout the South American continent remained consistent during the first part of 2020 in comparison to the 30-year average data, but temperatures in North America declined from February to April. Similarly, the temperatures in Eurasia in April were lower, compared with the 30-year average, than in February and March. Accordingly, the average temperature of the Earth has dropped about 0.3°C compared to 2019. We concluded that temperature could show some specific changes and hypothesize that, under the COVID-19 pandemic, it could manifest different trends. The next step would be to conduct further analysis to observe at the regional scale whether unforeseen phenomena are or are not affecting global warming during the coming years.

Introduction
It is possible that each century a virus pandemic rages worldwide, but in 2019 an unknown, novel, and atypical virus with symptoms similar to pneumonia broke out in Wuhan, China (El Zowalaty & Järhult, 2020; Wu et al., 2020). The results of preliminary studies revealed that the COVID-19 virus reportedly shared an identical receptor, ACE2 (angiotensin-converting enzyme 2), with the Severe Acute Respiratory Syndrome coronavirus (SARS-CoV) (Zaki et al., 2012). The World Health Organization (WHO) established an international committee to oversee the disease, and on February 11, 2020, named the condition COVID-19 (Coronavirus Disease) (Chang et al., 2020; Zhao et al., 2020). This disease, which is spreading rapidly worldwide, differs in many ways from other viruses that have been recognized previously. For instance, COVID-19 renders quick transmission and asymptomaticity among infected individuals as particular features.
The number of COVID-19 infection cases reached more than nine million worldwide within the period from January to June 2020, and the statistics are still increasing (Gorbalenya et al., 2020; World Health Organization, 2020). The World Health Organization (WHO) resolved to control the COVID-19 pandemic by implementing quarantine measures worldwide due to the lack of vaccines and efficient control measures to halt the virus (Mehta et al., 2020; Palayew et al., 2020). Consequently, many countries relented to reduce all production and transportation ventures and other social, economic, political, etc. activities to zero (Ashraf, 2020; Muscogiuri et al., 2020; Piguillem & Shi, 2020; Sjödin et al., 2020). Viral diseases of the past could not be observed in changes concerning the world environment, since industrial ventures had yet to be initiated in the world. Yet, the cessation of production and human activities has reduced the consumption of energy and fossil fuels, effluents, and pollution worldwide, since all human activities in the 21st century are associated with industrial activities (Oldekop et al., 2020; Steffen et al., 2020; You et al., 2020). Cessation of industrial activities can render a direct impact on ecosystems, remarkably on global climate change (Danovaro et al., 2020; McMichael, 2003; O'Brien & Leichenko, 2000; Shi et al., 2015). Le Quéré et al. (2020) estimated that daily global CO2 emissions could have been reduced by −17% by early April 2020 compared with the mean 2019 levels, with peaks in specific countries of −26% on average. Therefore, one of the Sustainable Development Goals for 2030 is to examine the impact of climate-related factors on this pandemic, because climate action is considered the 13th objective of the Sustainable Development Goals (https://www.un.org/sustainabledevelopment/climate-change/). Additionally, the third objective of the Sustainable Development Goals is related to human health and well-being, and the coronavirus directly impacts the discussed human well-being aspects while rendering an indirect impact on greenhouse gases and climate (Mandal & Pal, 2020; Wang & Su, 2020). It is well known that any change in climatic conditions could directly impact the survival of natural ecosystems (Malhi et al., 2020) or modify human settlements (Živković, 2019) on Earth. Reducing pollution in the environment can positively affect the reduction of greenhouse gases, subsequently altering the global temperature (Chakraborty & Maity, 2020; Diffenbaugh et al., 2020; Zambrano-Monserrate et al., 2020). Based on the conclusions of Rosenbloom and Markard's (2020) study on the impacts of the COVID-19 pandemic on air pollution and particularly greenhouse gases, it was revealed that the production of greenhouse gases could dramatically decline following the COVID-19 pandemic and the global closure of industrial factories. Moreover, the role of human activities in specific climate parameters could have become more evident with the outbreak of COVID-19, but the extent of these changes at the national and international scales is still unknown and requires further investigation (Rosenbloom & Markard, 2020; Watts et al., 2018). Yet, maintaining recent, historical, and future information on temperature changes can assist in better management of other aspects within other connected Earth's spheres, such as the pedosphere (Brevik et al., 2020; Rodrigo-Comino et al., 2018).
This information should be further investigated on global and regional scales, respectively (Caesar et al., 2006). Some events occurring during world history can have an unforeseen impact on the Earth's temperature, and accommodating this information would serve in predicting and analyzing the coming trends on the Earth's surface. The data are often estimated locally by researchers, given that the climate data assembled from the Earth's surface are quite extensive, but this information fails to signify the relevance of this issue (Office of the Leading Group for Promoting the Belt and Road Initiative, 2019). Results showed that the surface air temperature during the coronavirus disease 2019 (COVID-19) outbreak decreased by 0.05°C in commercial areas of the city of Osaka, Japan (Nakajima et al., 2021). In addition, a study was conducted to evaluate the effect of suppressed human activities on temperature in the Tokyo Metropolitan area; the results show that the temperature in Tokyo varied within a range of ±0.19°C on average over the strong self-restraint period from April to May (Fujibe, 2020). Other studies of the effect of suppressed human activities on temperature show decreases of up to 1°C in the surface temperature of city regions (Ali et al., 2021; Potter & Alexander, 2021; Teufel et al., 2021). Accordingly, the main aim of this study was to investigate possible differences in global warming due to the occurrence of an unforeseen event, the Coronavirus (COVID-19) pandemic. The Coronavirus (COVID-19) pandemic has led to the closure or temporary cessation of countless human activities worldwide, and despite human factors being a significant determinant in climate change, not enough research has been conducted to address the issue on a global scale. Furthermore, we evaluate the impact of the Coronavirus pandemic on different global warming scenarios considering the short- and long-term global temperatures. Concerning the short-term periods, we compared the data from 2016, 2017, 2018, and 2019; and, for assessing the long-term differences, 30 years of data (1981–2010) were used. The results of this study could serve to illustrate a possible indicator and adverse consequences of the COVID-19 pandemic worldwide at the continental scale.

Materials and Methods
The total available land of the Earth was examined in the present study. Accordingly, the surface of all seas, oceans, and lakes was separated from the land surface, and only the land surface temperature (continental lands with the north and south poles) was assessed. Hence, the mask method was executed on all maps prepared in Linux and Python environments to determine the subject area (Figure 1). The framework employed for drawing the temperature map is included in Figure 2. We showed that the preparation of daily average land temperature data was made using synoptic stations, followed by conversion of the daily average temperature data to monthly data, training sample generation, zoning of data on the world map, classification, accuracy assessment, and finally, regional classifications and evaluations of the obtained results. We registered the raw temperature data within the software and then performed the necessary analyses.

Data availability
Synoptic station data were gathered across the world and subsequently recorded, collected, and transferred to the Global Meteorological Database. Firstly, NOAA (National Oceanic and Atmospheric Administration) is one of the employed daily temperature databases (http://dx.doi.org/10.7289/V5D21VHZ).
Then, the Copernicus database (https://cds.climate.copernicus.eu/cdsapp#!/home) was similarly used to examine the 30-year temperature data. Data is available for all countries in each continent. Methodology The present study was divided into two different seasons in 2020, namely winter and spring, as the onset of the COVID-19 pandemic occurred in winter ( January and February) and spring (March and April), and the onset has been ensuing ever since. The daily GRIB 1 format data was converted to NCL. Following, the daily data was converted into monthly data (See Appendix), and then the temperature maps pertaining to 2016, 2017, 2018, 2019, and 2020 were plotted to render a short-term comparison of COVID-19 impacts on the surface temperature changes. Ultimately, the temperature changes occurring in different months of 2016, 2017, 2018, 2019, and 2020 were examined ( Figure 2). The mathematical model for drawing short-term temperature differences is presented in equation (1). (1) In this formula, T-AV month is the average monthly temperature, and T means the average daily temperature registered in synoptic stations, referred to each month. On the other hand, the 30-year averages for January, February, March, and April were prepared via monthly data to examine the possible long-term climate change differences occurring 1981-2010 and COVID-19 pandemic-induced temperature data changes. Further, these averages were analyzed considering the temperature data obtained in 2020 ( Figure 2). The mathematical model for plotting a long-term temperature difference is presented in Equation 2. (2) T-AV year is the average monthly temperature, and M represents the average monthly temperature referred to each month. Softwares and code availability Python software and Linux environment were employed to draw the temperature maps. All GRIB data were analyzed in NC format in the NCL environment and Python. The steps of drawing the data included format conversion, processing reading the data by Python software, plotting, converting the temperature unit from Kelvin to Celsius, saving, and outputting the data. The Python code used for the analysis is available upon request. Additionally, some of the codes relevant to the Python software are included in Supplemental Material 1. Short-term assessments of global temperatures The results of the study concerning the different land surface temperatures during the last 4 years (2016, 2017, 2018, and 2019) and 2020, and the onset of the COVID-19 pandemic are displayed in Figure 3. January temperature difference between 2016 and 2020 revealed that the Eurasian continent (Europe and Asia) maintained the highest temperature this year, to the extent that the temperature difference in this period reached more than 15 degrees. Furthermore, the 2016 temperature in Antarctica was higher than in 2017, 2018, and 2019, while the North American continent presents a temperature difference above zero compared to January 2020. In the case of Oceania, the results showed that the temperature difference between Air, Soil and Water Research 2020 and 2019 is remarkable to the extent that the continent is progressing toward the reduction in temperatures, and is colder compared to the past 4 years. The land surface temperature difference in Africa in 2016, 2017, and 2018 is progressing from unchanged to increase. Also, the largest temperature difference in terms of comparison to January 2020 refers to 2016. 
The results of the February land surface temperature differences between 2016, 2017, 2018, and 2019 compared to 2020 revealed that the temperature manifested two distinct behaviors in Eurasia; according to this, the temperature in February 2016 was higher in the eastern regions of the continent, and the temperature has been rising annually toward the western regions of the continent (temperature differences amount to more than 12°). The February temperatures have been rising further in Eurasia from 2016 onwards, but these temperatures decreased in the majority of the continent as the COVID-19 pandemic began and production restrictions along with global quarantine measures were implemented (Figure 3). The February temperature survey in Oceania among the different years (2016, 2017, 2018, and 2019) determined that the temperature was higher compared to 2020, and this difference is progressing every year. Contrarily, the results of the temperature difference revealed that the temperature dropped throughout the continent at the same time during this period. The results of Christidis et al. (2020) on temperature changes in Europe determined that the temperature in 2018 reached the highest levels observed in the last century, which is consistent with the results of this study. They also directly linked this outcome to increased human activity; the results of this study similarly showed that the cessation of industrial activities and the implementation of quarantine measures have reduced human activity and subsequently could provoke temperature changes worldwide. Asian, European, and African countries are recognized for having vulnerable climates, to the extent that any temperature change will inflict the greatest impact on their respective water resources and environmental pursuits. According to the results of this study, the temperature gradually increased from 2016 until 2018, and polar glacier meltdowns in Russian regions with a speed of 25 m per day (m/day) have been reported during these years (Willis et al., 2018). However, the results of comparing the March temperature differences observed in 2016 and 2017 to 2020 indicated that the temperatures were lower on all continents, to the extent that Greenland's March temperature in 2016 depicts approximately −20°C of temperature difference compared to 2020. The March temperature analysis confirmed that the temperatures reached the highest value in 2018, when this temperature had reached approximately 15°C in Asian regions, depicting a higher temperature than 2020. As the results indicated, the changes in this month have undergone a decrease due to the implementation of maximum production cessation laws worldwide, and the March 2020 temperature has decreased compared to prior years. A prior study conducted on the daily temperature of the Earth's surface aimed to investigate the temperature differences observed in the 1979 to 2018 period. The results of this study reported an increase in temperature amounting to approximately 0.5°C, with a maximum temperature of 40°C and a minimum of 20°C during the day. Accordingly, this factor was further reported to directly impact economic ventures worldwide (Yang & Zhang, 2020).
The April temperature difference is quite different from January, February, and March, and accordingly, the temperature on the entire surface of the earth has undergone a sharp decrease to the extent that all continents have displayed decreased temperatures in the rest of the years given the high temperatures observed in April 2020. As the measures restricting the activity of associations and organizations intensified in most countries, particularly in the United States and Europe, it directly influenced the temperature, and the temperature differences revealed that the Earth's land surface temperature had decreased, causing the Americas' temperatures to drop by approximately −8 to −10°C in April compared to the past year, 2019. Temperature changes in Europe showed that temperature changes resulting from human activities are the most prevalent factor in climate change. Parallel with this, the temperature in summer 2018 reached the highest record value in Europe, which was the aftermath of a 30% increase in human activities in the same region (McCarthy et al., 2019;Vautard et al., 2019). Investigating the temperature changes in the UK employing a more extensive region data confirmed that the limitations of local-scale studies have not always been appropriate for prediction due to local effects, thus, suggesting to employ smaller scales to predict temperature changes (Christidis et al., 2020). The results of daily data analysis in the UK attested that the temperature had warmed by approximately 1°C, and this trend is still increasing, with all models displaying an increase in 2019 temperature. In this study, 16 respective climate models were studied to predict temperature changes in the United Kingdom, and two categories of natural and human activities were further taken into account. The most relevant human activities concerning the temperature changes are changes in greenhouse gases arising from the factories, aerosols, ozone, and land use, whereas the natural impacts concerning the temperature changes are solar activities and volcanic aerosol emissions (Christidis et al., 2020 ). The results of this study likewise confirmed that the temperature had decreased in comparison to the average of 2020 winter and spring months, which is consistent with the decrease in human activities (possible implementation of quarantine measures worldwide). Linear models rendered more accuracy for examining temperature changes than other models such as the HadUK-Grid. Comparison of 2020 temperature with the long-term average datasets (30-years average temperature) The results of the temperature difference comparison between 2020 winter and spring months and the average of the same months in the 30 years are displayed in Figure 4. The results of comparing the January 2020 land surface temperature differences with the long-term average (30-year average) revealed that the temperature manifested two different trends in North America. Accordingly, the January 2020 temperature was higher in the eastern regions of the continent while the temperature in the western regions of the same continent appeared below the average temperature referred to in 30 years. Contrarily in South America, the temperature has prevailed unchanged from the average temperature of 30-years. The results further indicate that the central regions of Eurasia are warmer compared to the 30-year average temperature, but the respective northern and southern regions remain moderately unchanged. 
Moreover, Northern Australia maintains a warmer temperature, whereas Western Australia is cooler compared to the 30-year average temperature in this country. The temperature changes could not be particularly severe in Asia, since the lockdown of countries originated in Asia in January. The results of the February temperature difference comparison between the Earth's land surface and the 30-year average revealed that the February 2020 temperature increase in Eurasia was higher in the northern regions of the continent than in the south, reaching approximately 20°C and even extending to the northern regions of the African continent. This month's temperatures in the Australian continent also displayed a radically different behavior compared to January. According to this difference, the temperature increased in general, yet a temperature shift from east to west is observed, unlike January. The February temperature survey in the Americas also showed that temperatures had progressed toward a decrease, with differences ranging between 0°C and −8°C (Figure 4). As the results presented, the temperature throughout the South American continent was the same as in January, showing no changes compared to the long-term average (Figure 4). The most extensive geographical range of above-zero temperature differences was observed in February, which coincided with the quarantine measures and the shutdown of factories worldwide. The results of comparing March 2020 temperature differences with the long-term average revealed that the temperature had undergone a rise in all continents, which was causing the meltdown of glaciers in the northern regions of Russia (Willis et al., 2018). The temperature in Antarctica had dropped in comparison to the long-term average, despite the initial temperature progress toward an increase at the beginning of this season (spring). Furthermore, the March 2020 temperature in Greenland has been lower. Yet, the temperature on the American continent behaved differently, considering that the North American continent maintained significantly lower temperatures than its South American counterpart in comparison to the long-term average. The temperature difference in February was so significant that the differences ranged between 20°C and −20°C. These temperature differences are anticipated and could be considered in the future, taking into account the quarantine implementations worldwide. April 2020 temperature differences relative to the long-term average determined that the temperature in Antarctica and at the North Pole had undergone a rise of approximately 10°C. The temperature decreased in April compared to March of the same year in Eurasia, with a maximum temperature difference of 3°C. Similar results were observed in Oceania, where April 2020 temperatures decreased compared to the 30-year average (in spring). However, no considerable shift from the 30-year average was obtained in South America within the 4 months (January, February, March, and April). Temperatures in North America have also been colder compared to the long-term average (up to −3°C), but these changes have been less than in March.
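As a rough, hypothetical illustration of the comparison performed in this section (not the authors' processing scripts; the data here are synthetic), monthly means can be derived from daily records and each 2020 month contrasted with its 1981–2010 climatological mean, which appears to be the calculation that Equations (1) and (2) in the methodology describe:

```python
import numpy as np
import pandas as pd

# Synthetic daily land-surface temperatures, 1981-2020.
rng = np.random.default_rng(1)
days = pd.date_range("1981-01-01", "2020-12-31", freq="D")
daily_t = pd.Series(15 + 10 * np.sin(2 * np.pi * days.dayofyear / 365)
                    + rng.normal(0, 2, len(days)), index=days)

monthly_t = daily_t.resample("MS").mean()                  # daily values -> monthly mean
baseline = monthly_t["1981":"2010"]                        # 30-year reference period
clim = baseline.groupby(baseline.index.month).mean()       # long-term mean per calendar month

feb_2020 = monthly_t.loc[pd.Timestamp("2020-02-01")]       # February 2020 monthly mean
anomaly = feb_2020 - clim.loc[2]                           # departure from the 30-year mean
print(f"February 2020 anomaly vs. 1981-2010: {anomaly:+.2f} °C")
```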
Recent studies on the long-term trend of land surface temperature determined that the new decade's (2009–2018) temperature is approximately 0.7°C warmer than the previous decade, while, simultaneously, the temperature changes have extended above 30°C, impacts that are due to the increased production of factories and human activities on Earth (Cattiaux et al., 2010; Fischer et al., 2013). In a similar study conducted by Sippel et al. (2020), daily temperature data were employed to study climate change, using a variety of temperature forecasting models from the National Centers for Environmental Prediction (NCEP) and CMIP5 temperature forecasts for 2020 and the future alike. The ultimate results demonstrated that the temperature increased by approximately 1°C from 1950 to 2018, and this gradual increase over time was also anticipated by the discussed climate models, namely the National Centers for Environmental Prediction (NCEP) and CMIP5 models. In the present study, the change trends observed in the average winter and spring months' temperatures denoted that this factor corresponded with the beginning of the decline in human activity (resulting from the COVID-19 outbreak and the implemented quarantine measures) worldwide, since the maximum decrease in temperature became more severe from late winter onward. The temperature had dropped compared to the 30-year average, and the Earth's surface temperature has similarly undergone a decrease. Moreover, a survey of the average temperatures of the previous years (2016–2018) during the winter and spring has designated a trend of increasing temperature. Consequently, the COVID-19 pandemic could be able to subdue, suddenly and temporarily, many of the factors impacting the increase in temperature. Other researchers further studied the average temperature in the United States during the 1980 to 2009 period and confirmed that the temperature is lower in the spring; yet, this factor leads to unregulated streams and increased human activities in the autumn, winter, and summer seasons, to the extent that it has increased the spring temperatures.

Average Earth temperature
The average monthly temperature of the Earth revealed significant changes in the study of Earth's temperature. According to this, the minimum and maximum temperature differences in the years before 2018 were quite severe, but from 2018 to 2020 this difference gap has decreased and the Earth's surface temperature is progressing toward a rise (Figure 5). Based on the information obtained from the World Meteorological Organization, the results revealed that the average temperature of the Earth is currently increasing. The average temperature of the Earth rose by approximately 0.5°C in 2018 compared to 2017. Moreover, the average temperature in 2020 has reached roughly 28.2°C, showing an increase of nearly 0.2°C compared to 2019 estimations. Consequently, it can be concluded that the average global temperature in 2020 could be decreased compared to the prior years (Figure 5). The results show an approximately 0.3°C temperature decrease in early 2020. This decrease can grow up to 0.5°C if the worldwide lockdowns persist.

Conclusions
In this research, we compared the Earth's surface temperature using different term periods. The main aim was to detect whether the current COVID-19 pandemic impacts, the reduction of human activities and forced quarantine, would have affected global warming values by reducing the mean temperature values.
Our results showed that the largest difference in the short-term temperature in comparison to 2020 referred to the months when the quarantine began, that is, February and March, when the temperature was cooler than in the prior years. The long-term mean assessment highlighted that the temperatures throughout the South American continent remained consistent during the first part of 2020 in comparison to the 30-year average data, but temperatures in North America declined from February to April. Similarly, the temperatures in Europe and Asia in April were lower, compared with the 30-year average data, than in February and March. Also, the average temperature of the Earth dropped about 0.3°C compared to 2019. Based on the results, there was an approximately 0.2°C decrease in average temperature in early 2020. If the lockdown persists, this decrease can grow to about 0.4°C in late 2020 and continue over 2021. On short-term and long-term scales, temperature variations based on the COVID-19 expansion were more pronounced in North America, Europe, and Asia. In contrast, minimal temperature changes occurred, respectively, in Australia, Africa, and South America. Considering that future analyses during the coming years must also be conducted, we hypothesize that the impacts of the COVID-19 pandemic on human activities could manifest different temperature trends over the world. These changes could be different considering diverse spatial scales (from regional to country scales), but, observing these results, this unforeseen phenomenon could represent a new factor to be considered for global warming and climate change studies during the coming years.

Acknowledgments
The authors thank the providers of the temperature databases, and the Copernicus Data Management Support, for their sincere aid in providing the necessary information for completing this project.

Author Contributions
S.S. conceived the study with P.N., who conducted the statistical analysis. All authors contributed to the interpretation of the results and the writing of the manuscript.

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Defining definition: a Text Mining Approach to Define Innovative Technological Fields

One of the first tasks of an innovative project is delineating the scope of the project itself or of the product/service to be developed. A wrong scope definition can lead, in the worst case, to project failure. A good scope definition becomes even more relevant in technology-intensive innovation projects, nowadays characterized by a highly dynamic, multidisciplinary, turbulent and uncertain environment. In these cases, the boundaries of the project are not easily detectable and it is difficult to decide what is in-scope and out-of-scope. The present work proposes a tool for the scope delineation process that automatically defines an innovative technological field or a new technology. The tool is based on a text mining algorithm that exploits Elsevier's Scopus abstracts in order to extract the relevant data to define a technological scope. The automatic definition tool is then applied to two case studies: Artificial Intelligence and Data Science. The results show how the tool can provide much crucial information in the definition process of a technological field. In particular, for the target technological field (or technology), it provides the definition and other elements related to the target.

Introduction
According to Bryce [3], the project scope is used to define the business problem and the opportunity to be addressed. The scope should be clear and has to remain the same for the whole project. As C. Cho defines [4], a poor scope definition is recognized by industry practitioners as one of the leading causes of project failure, adversely affecting projects in the areas of cost, schedule, and operational characteristics. For these reasons a well-defined scope is fundamental in a project, and it becomes very important in the innovation process. Innovative projects are characterized by a high degree of uncertainty. The risk linked to uncertainty (in terms of both probability and magnitude) becomes even more relevant in innovative projects in technological fields, characterized by a highly dynamic environment: multidisciplinarity, turbulence and uncertainty [5]. In these cases, the boundaries are not easily detectable and it is difficult to decide what is in-scope and out-of-scope. The present work demonstrates that it is possible to define new technological fields or technologies using text mining tools, in order to support innovators and researchers in the scope definition process within innovative projects. To construct a definition of a target innovative tech field, text mining techniques are applied to Elsevier's Scopus abstracts to extract relevant information that is useful in the scope definition process of the target technological field: 1. definitions: a definition is a statement of the meaning of the term; 2. hyponyms: a hyponym of a term x is a term y included in the semantic field of term x; 3. hypernyms: a hypernym of a term x is a term y that includes the term x in its semantic field. The present work is structured as follows. In Section 2 the relevant literature needed to understand our work is reported, in particular an explanation of what a definition is. In Section 3 the methodology developed to build a tool that aims to automatically define a tech field is presented. In Section 4 the automatic definition tool is applied to two case studies: Artificial Intelligence and Data Science. Finally, in Section 5 we discuss the conclusions and the next steps of our work.
Defining Definitions
The purpose of a definition is to map the meaning of a term in order to provide the user with an understanding of what the term is about. John, L. (1977) [6] classifies definitions into two large categories: intensional definitions and extensional definitions. Roy T. Cook (2009) explains the differences between these two types of definition [7] as: • "An intensional definition gives the meaning of a term by specifying all the properties required to come to that definition, that is, the necessary and sufficient conditions for belonging to the set being defined". • "An extensional definition defines by listing everything that falls under that definition". In new technological fields, both categories of definitions are required to understand the field completely, but this work is focused on intensional definitions, because this definition form can help to develop a set of rules for mining definitions from texts. Also, a list of elements falling under the meaning of the technological field is provided, in an attempt to give an extensional definition as well. In the genus–differentia model, a definition assigns the definiendum x to a genus y (a more general term) and adds the features that differentiate x from the other terms belonging to y. The genus–differentia model is useful to define the relations and properties among terms in a mathematical way. It is important to provide users with a standard delineation of these terms, so as not to create confusion among the semantic relations. The delineation of some semantic relations between terms is fundamental to understand the work presented in this paper; in particular, the definitions of semantic field, hypernymy, hyponymy and synonymy are constructed starting from the genus–differentia definition. Green R. (2013) tries to define a subset of these terms in a mathematical way, but in a different manner from the one presented [9]. For the scope of this work, we prefer the forms described below, because they are based on the genus–differentia definition; we will stick to one school of thought in order to provide the same point of view throughout the whole paper.

Definition 1 – A semantic field is a set of words grouped semantically. For example, the semantic field of the word organ is a set of words {heart; liver; small intestine; …}. The semantic field of a term y is denoted here by F_y.

Definition 2 – Hypernymy is a relationship that relates two terms x and y, in which a term y, called the hypernym, includes in its own semantic field other terms x, called hyponyms, that have a semantic field smaller than y. Thus: y is a hypernym of x ⟺ x ∈ F_y. In other words, a hypernym is a term that indicates a lexical unit of meaning more generic and extensive than one or more lexical units that are included in the semantic field of the hypernym. For example, reptile is a hypernym of lizard. The contrary of hypernym is hyponym.

Definition 3 – Hyponymy is a relationship that relates two terms x and y, in which a term x, called the hyponym, is included in the semantic field of another term y, called the hypernym, that has a semantic field more extensive than x. Thus: x is a hyponym of y ⟺ x ∈ F_y. Consequently F_x ⊂ F_y ∧ #F_x < #F_y. For example, sunflower is a hyponym of flower.

Definition 4 – Synonymy is a relationship that relates two terms x and y, where x ≠ y, but they have the same definiens. Thus: x is a synonym of y (and vice versa) ⟺ def(x) = hypernym + differentia = def(y). Consequently x ∈ F_hypernym ∧ y ∈ F_hypernym. For example, flask is a synonym of balloon.
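A toy illustration of Definitions 1–4 (our own sketch; the vocabulary and field contents are made up) models semantic fields as plain Python sets and expresses hypernymy/hyponymy as the set relations given above:

```python
# Semantic fields F_y modeled as sets of terms (illustrative example).
semantic_field = {
    "flower": {"flower", "sunflower", "rose", "tulip"},
    "sunflower": {"sunflower"},
    "rose": {"rose"},
}

def is_hyponym(x: str, y: str) -> bool:
    """x is a hyponym of y iff x belongs to F_y and F_x is a proper subset of F_y."""
    fx, fy = semantic_field.get(x, {x}), semantic_field.get(y, {y})
    return x in fy and fx < fy          # '<' is proper-subset for Python sets

def is_hypernym(y: str, x: str) -> bool:
    """y is a hypernym of x iff x is a hyponym of y."""
    return is_hyponym(x, y)

print(is_hyponym("sunflower", "flower"))   # True
print(is_hypernym("flower", "rose"))       # True
```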
Text mining techniques
Text Mining is a field of research which helps in getting relevant information from unstructured textual data. It is an interdisciplinary field which draws on information retrieval, data mining, machine learning, statistics and computational linguistics. Since most information, over 80%, is stored as text, text mining is believed to have a high commercial potential value [10]. The problem addressed by text mining is obvious: natural language was developed for humans to communicate with one another and to record information, and computers are a long way from comprehending natural language. Most advanced text mining software uses sophisticated Natural Language Processing (NLP) algorithms. Natural language processing (or NLP) is a component of text mining that performs a special kind of linguistic analysis that essentially helps a machine "read" text [11]. In the present work, the most important NLP tool is universal POS tagging: it marks the core part-of-speech categories and, to distinguish additional lexical and grammatical properties of words, uses the universal features. The system used was developed in CoNLL, which stands for Conference on Natural Language Learning and is SIGNLL's (Special Interest Group on Natural Language Learning) yearly meeting [12]. In particular, we used the R package udpipe [13], which uses a revised version of the CoNLL-X format called CoNLL-U [14]. Text chunking, one of the most relevant techniques built on top of the CoNLL-U annotation, is also borrowed from Natural Language Processing. The activity divides sentences into non-overlapping segments [15] and is done in unsupervised mode by directly dividing sentences into phrases using linguistics or statistics [16]. The second relevant technique used in this article is Named-Entity Recognition (NER), a subtask of information extraction that seeks to locate and classify named entity mentions in unstructured text into pre-defined categories such as person names, organizations, and locations. These text mining techniques have been used in this work to manipulate the Scopus abstracts in order to extract the relevant data from them, such as definitions, hypernyms and hyponyms. We used only the abstracts rather than the entire body of each article because having access to all the articles on a given topic is often difficult, while the abstracts can be sources for massive information analysis at very low cost.
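As an illustration of the chunking idea (a toy sketch of our own with hand-tagged tokens, not the udpipe pipeline actually used), consecutive tokens whose universal POS tags form a noun phrase after a definitor can be grouped into a single non-overlapping segment, which is how a candidate genus chunk could be isolated:

```python
# Toy chunker: group the determiner/adjective/noun run that follows the definitor "is"
# into one segment. Tags follow the universal POS tag set (DET, ADJ, NOUN, ...).
tagged = [("Data", "NOUN"), ("science", "NOUN"), ("is", "AUX"),
          ("an", "DET"), ("interdisciplinary", "ADJ"), ("field", "NOUN"),
          ("that", "PRON"), ("extracts", "VERB"), ("knowledge", "NOUN")]

def genus_chunk(tokens):
    """Return the noun-phrase chunk immediately following the first 'is'/'are'."""
    words = [w.lower() for w, _ in tokens]
    try:
        start = next(i for i, w in enumerate(words) if w in ("is", "are")) + 1
    except StopIteration:
        return []
    chunk = []
    for word, tag in tokens[start:]:
        if tag in ("DET", "ADJ", "NOUN"):
            chunk.append(word)
        else:
            break
    return chunk

print(genus_chunk(tagged))   # ['an', 'interdisciplinary', 'field']
```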
Methodology
The methodology to build a tool that aims to automatically define a technological field has been divided into three phases: 1. Rules construction: in this phase a set of rules has been designed to extract from the abstracts of scientific papers the definitions of a technological field (or other relevant elements that can be useful in the definition process). The rules have been implemented in the form of regular expressions. A regular expression is a sequence of strings used to identify text data that follow the regularity sought [18]. The whole analytic process has been developed in the software RStudio. The proposed process is shown in Figure 1. The following section describes in depth each task of the presented methodology.

Construct top-down rules: to construct the rules for extracting definitions, hypernyms and hyponyms, a hybrid theoretical–empirical approach has been used; the top-down rules derive from the literature on definitions (such as [6], [7]) and from [20], whose complete list is joined with the exploration of related documents. Thus, the list of sources establishes the theoretical base for the definition process and for performing the construct-top-down-rules task.

List random technologies: the bottom-up rules have been formulated by analyzing more than 600 definitions of technologies extracted from Wikipedia. Starting from the Wikipedia page List of emerging technologies [19], we extract the hyperlinks to list random technologies. The repetition of this process with the extracted technologies enlarges the list of words. Since consistency of analysis is crucial in bottom-up processes, the starting technology list must be large enough; we aim at extracting more than 600 definitions. If the number were to fall below 600, we would require the enlargement of the technology list. To mine definitions from Wikipedia, a regular expression is built using the desired technology term. This task is based upon the hypothesis that the Wikipedia page of a term starts with the definition of the same term.

Analyze definitions: in this step, the extracted definitions of the listed technologies are screened. The dataset of definitions of each term belonging to the list of technologies has been analysed to build a set of bottom-up rules. In particular, the elements observed in each definition have been: (i) the definiendum; (ii) the definitor; (iii) the genus; (iv) the words between definiendum and definitor. Finally, this task aims to obtain a list of empirical rules in order to extract definitions, hyponyms, hypernyms and synonyms from text; in this specific instance we will try to mine these relevant data from Scopus abstracts. The rules have then been classified into families, based on the information extracted from the texts, and into classes, grouping the rules of each family according to multiple criteria. Table 1 shows a part of the 36 rules identified in the rules-construction phase. All established rules are available in [21]. The families of the rules are: definition, rules with the function of extracting case-study definitions; hyponym, rules with the function of extracting hyponyms and hypernyms of the observations. The possible classes, according to which a rule can be classified, are: use only the definitor {"is"}; use only the definitor {"refers to"}; use both of them {"refers to", "is"}.

List relevant observations: in order to test and evaluate the constructed rules, the relevant observations must be listed.

Evaluate rule performances: for each rule we apply the prototype with the aim of extracting definitions and hyponymies from all the observed Scopus abstracts. The results have been manually analysed to understand whether the rules could be useful for the tool construction. Rules belonging to the same class have been evaluated and ranked to decide which of them could represent the basis for the tool (the output of the present work). The ranking took into consideration aspects such as the number of relevant results identified by the prototype and the precision of the rule. The column "number of relevant results" contains the mean number of results obtained thanks to the rule's application on the total observations. The column "precision" indicates the mean of all observations' own precision, weighted on the extracted sentences, with the aim of reducing the importance of observations having high precision but few extracted sentences at the same time.

Analyse genera: in the next sections the analysis of the case studies is presented, as shown in Figure 1. The genera have been mined with the constructed rules described in Section 3.5, and the frequency distribution of the genera will be analysed. To perform this process, text mining tools are used to capture the genera; in fact, the constructed rules are based on the theory of chunking, which ensures that all the words composing a genus in a definition are mined, and not only a part of them. The purpose of this task is to identify the genera to insert in our definitions of the observed tech fields.
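A minimal sketch of what such an extraction rule can look like (an illustrative regular expression of our own, not one of the 36 rules listed in [21]; the function name and example sentence are also hypothetical): it captures the definiendum, the definitor and the noun-phrase genus from a definitional sentence.

```python
import re

def extract_definition(term: str, text: str):
    """Match '<term> <definitor> <genus> ...' and return the named groups."""
    pattern = re.compile(
        rf"(?P<definiendum>{re.escape(term)})\s+"
        rf"(?P<definitor>is|are|refers to)\s+"
        rf"(?P<genus>(?:a|an|the)\s+\w+(?:\s+\w+)*?)"
        rf"(?=\s+(?:that|which|used|,|\.))",
        flags=re.IGNORECASE,
    )
    m = pattern.search(text)
    return m.groupdict() if m else None

abstract = ("Artificial intelligence is a branch of computer science "
            "that studies intelligent agents.")
print(extract_definition("artificial intelligence", abstract))
# {'definiendum': 'Artificial intelligence', 'definitor': 'is',
#  'genus': 'a branch of computer science'}
```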
Constructed ontology: the scope of this process is to create an ontology of the observed tech field using the hypernyms and hyponyms extracted from the Scopus abstracts. Also in this case, the extraction of these elements is possible thanks to the rules constructed in Section 3.5. Finally, for Artificial Intelligence and Data Science, a definition will be built using the genera, the distinctive features analysis and the ontology. The two case studies will then be compared with each other.

Results
The following section describes the performance of the automatic definition process on two different case studies: Artificial Intelligence and Data Science. Each tech field is analysed in its own section of this part. Finally, in Section 4.3 the comparison of the technological fields is shown.

Case studies results: Artificial Intelligence. The extracted definitions of Artificial Intelligence are 107; an example of these is shown in the corresponding table. Table 4 reports an example of the extracted hyponyms and hypernyms of Artificial Intelligence from Scopus abstracts: for instance, from the sentence "we conduct a holistic, systematic literature review using artificial intelligence technologies such as information retrieval, text mining and supervised learning, side-by-side with manual reading of many relevant articles", the extracted hyponym is "supervised learning" and the extracted hypernym is "technology". In the column "Scopus ID" the identification code of the article is shown, referring to the abstract in which the hypernyms and hyponyms are contained. In the column "Sentence", the sentence containing the hyponyms and hypernyms is reported. In the column "Hyponym", the hyponym extracted from the sentence is shown. In the column "Hypernym", the hypernym of the hyponym extracted from the sentence is reported. To verify the robustness of the analysis, some aspects have been analysed and plotted in Figure 2. Figure 2a communicates which genera are most used in Artificial Intelligence definitions, e.g. the chunk "branch of computer science" has an occurrence probability of 15%.

Case studies results: Data Science. The extracted definitions of Data Science are 26; an example of these is shown in Table 5. The hypernyms and hyponyms extracted with the tool are 27, and a part of these is shown in the corresponding table. In Figure 3 the analysis of the Data Science definition set is shown. Def.: Data science is an interdisciplinary field with the purpose of extracting knowledge from data. E.g.: some technologies used in Data Science are: Data mining, Big data, Cloud computing.

Case studies comparison. In this section a comparison between the two analyzed case studies is presented, in terms of the convergence of the scientific community in the delineation of each tech field. To compare the cases, a network analysis has been performed. Each definition of Artificial Intelligence and Data Science has been represented as a vector based on the words found in the definition. To construct the network analysis, the definitions have been represented in a graph, where a node is a definition of Artificial Intelligence or Data Science and an arc between two definitions represents the link between both definitions. The strength of the link depends on the number of words common to both definitions. The hypothesis is: if a technological field is old, then its definitions in the literature tend to converge to the same meaning. On the other hand, a new technological field will be fuzzily defined, because it has not reached a level of maturity sufficient to have a commonly agreed definition. The hypothesis has been validated with some case studies in [21].
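To make the network construction concrete, a small self-contained sketch (our own illustration, not the authors' implementation; the three example definitions are invented) weights each edge by the number of words two definitions share:

```python
from itertools import combinations

# Toy definition network: nodes are definitions, edge weights are shared-word counts.
definitions = [
    "artificial intelligence is a branch of computer science",
    "artificial intelligence is the science of intelligent machines",
    "data science is an interdisciplinary field that extracts knowledge from data",
]

def word_set(definition: str) -> set:
    return set(definition.lower().split())

edges = {}
for i, j in combinations(range(len(definitions)), 2):
    weight = len(word_set(definitions[i]) & word_set(definitions[j]))
    if weight > 0:
        edges[(i, j)] = weight

print(edges)   # {(0, 1): 5, (0, 2): 2, (1, 2): 2} - stronger links mean more cohesive definitions
```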
In our case, the analysis of Artificial Intelligence compared to Data Science has been performed. Artificial intelligence first emerged in the 1980s and then evolved at the beginning of the 21st century with neural networks. This has resulted in a lesser cohesion of the definitions, as can be seen in the network diagram (Figure 4), where there are various clusters. On the other hand, Data Science tends to be more cohesive in terms of definitions even though it is a newer paradigm.

Conclusion
In the present paper we demonstrated that it is possible to mine definitions of innovative technologies or technological fields from the abstracts of scientific papers. We developed a methodology to solve this problem, and successfully applied the methodology to two case studies (Artificial Intelligence and Data Science). The results have been compared between the two case studies, also showing how our tool is able to identify fuzzily defined technological fields. A first next step is to understand whether it is possible to extract entities that are similar to technologies, such as methods or algorithms, using the presented methodology; the tool can be slightly modified in order to extract these other entities. Furthermore, other possible rules could be implemented in the automatic definition tool to enhance its precision and recall. Other sources can be mined for definitions (e.g. patents or Twitter) to enhance the ability of the tool to have a broad vision of a technological field. Finally, the tool has been designed to help innovators and researchers in the scope definition process; thus we want to evaluate in a field experiment whether this is the case, to assess the efficacy of the developed tool.
Rap1GAP Mediates Angiotensin II-Induced Cardiomyocyte Hypertrophy by Inhibiting Autophagy and Increasing Oxidative Stress Abnormal autophagy and oxidative stress contribute to angiotensin II- (Ang II-) induced cardiac hypertrophy and heart failure. We previously showed that Ang II increased Rap1GAP gene expression in cardiomyocytes associated with hypertrophy and autophagy disorders. Using real-time PCR and Western blot, we found that Rap1GAP expression was increased in the heart of Sprague Dawley (SD) rats infused by Ang II compared with saline infusion and in Ang II vs. vehicle-treated rat neonatal cardiomyocytes. Overexpression of Rap1GAP in cultured cardiomyocytes exacerbated Ang II-induced cardiomyocyte hypertrophy, reactive oxygen species (ROS) generation, and cell apoptosis and inhibited autophagy. The increased oxidative stress caused by Rap1GAP overexpression was inhibited by the treatment of autophagy agonists. Knockdown of Rap1GAP by siRNA markedly attenuated Ang II-induced cardiomyocyte hypertrophy and oxidative stress and enhanced autophagy. The AMPK/AKT/mTOR signaling pathway was inhibited by overexpression of Rap1GAP and activated by the knockdown of Rap1GAP. These results show that Rap1GAP-mediated pathway might be a new mechanism of Ang II-induced cardiomyocyte hypertrophy, which could be a potential target for the future treatment of cardiac hypertrophy and heart failure. Introduction Cardiac hypertrophy is an adaptive response of the heart to various pathological stimuli, including pressure overload, myocardial infarction and ischemia, and hypoxia [1]. The heart is able to maintain normal functions in the hypertrophic compensatory stage by changing its structure and metabolism, but this compensatory mechanism causes an increased oxygen consumption, which eventually leads to ventricular dilatation and heart failure (HF) [2]. In clinical practice, cardiac hypertrophy is the main cause of cardiomyocyte death, decreased myocardial contractility, and electrophysiological disorders [3]. It is well established that renin-angiotensin system (RAS) plays a critical role in cardiac hypertrophy and heart failure. The main effector of RAS is angiotensin II (Ang II) [4], which results in ventricular remodeling by mechanisms including the regulating of cardiac autophagy and oxidative stress. A growing body of evidence shows that the pathological process of cardiac hypertrophy is associated with excessive autophagy and reactive oxygen species (ROS), which eventually leads to cardiomyocyte necrosis and apoptosis [5,6]. Autophagy is the process of using lysosomes to degrade the damaged organelles and macromolecules, which is essential for normal cell homeostasis. Cardiac autophagy plays an important role in maintaining cell activity and heart function under stresses. Autophagy-mediated clearance of damaged organelles reduces inflammasome activation, thus mitigating cardiomyocyte dysfunction and coronary microvascular injury [7]. The regulation and the related mechanisms of cardiac autophagy are unclear yet. It has been reported that ROS production in the heart might be involved in the autophagy regulation [8,9]. Cardiac hypertrophy leads to increased oxygen consumption, and excessive ROS is produced in mitochondria, resulting in irreversible damage of mitochondrial DNA and further induces cardiac remodeling and failure [8,9]. 
In the meanwhile, ROS regulates autophagy through various mechanisms involving catalase, Atg4, mitochondrial electron transport chains, and the Ca 2+ release channel on the lysosomal membrane [10,11]. The Ras superfamily protein is a kind of small molecule GTP-binding protein prevalent in eukaryotes, which is involved in many processes of cell activities, including cell proliferation and differentiation, membrane trafficking, cytoskeleton regulation, and intracellular oxidase formation. It has nine subfamilies including Ras and Rab subfamilies. The Rap protein acts as a molecular switch in the regulation of multiple signaling pathways [12]. It has five subtypes: Rap1a, Rap1b, Rap2c, Rap2b, and Rap2c. Among them, the most abundant subtypes in the heart are Rap1a and Rap1b. Studies have reported that Rap1 is associated with mitochondrial ROS production in the heart [13], while the inhibition of oxidative stress is considered a promising therapeutic strategy for pathological cardiac hypertrophy and heart failure. However, there are no studies regarding whether Rap1 is involved in the progression of cardiac remodeling. Rap1 GTPase-activating protein (Rap1GAP) converts active GTP-bound Rap1 to inactive GDP-bound state [14]. Our previous studies found that Rap1GAP is expressed in rat cardiomyocytes and is upregulated in Ang II-induced cardiomyocyte hypertrophy [15]. The present study using techniques of gene silence and overexpression further demonstrated that Rap1GAP plays a critical role in mediating Ang II-induced cardiomyocyte hypertrophy through its regulation on autophagy and oxidative stress. Isolation of Neonatal Rat Cardiomyocytes and Cell Treatments. All animal procedures were approved by the Animal Care and Use Committee of Shandong University. Neonatal rat cardiomyocytes (NRCMs) were isolated enzymatically with collagenase II (Sigma-Aldrich, St. Louis, MO, USA) from 1-to 2-day-old Wistar rats. Briefly, the hearts of neonatal rats were cut into 1 mm 3 pieces and digested with type 2 collagenase at 37°C for 5 min. The cardiomyocytes were counted and seeded in a 6-well culture plate after digestion, centrifugation, and purification. NRCMs were then cultured in Dulbecco's modified Eagle's medium (DMEM, Gibco, USA) supplemented with 8% horse serum (Gibco, USA), 5% fetal bovine serum (Gibco, USA), 1% penicillin/streptomycin (Hyclone, USA) and 0.1 mmol/l bromodeoxyuridine (Sigma-Aldrich, USA) for 72 h. Brdu was used to inhibit the proliferation of fibroblast. Animal Study. All procedures were in compliance with the Guide for the Care and Use of Laboratory Animals and were approved by the Animal Care and Use Committee of Shandong University. Sprague Dawley (SD) adult rats at 8~10 weeks old were purchased from Beijing Vital River Laboratory Animal Technology Co., Ltd. Animals were fed with chow diet and maintained under a facility with a 12hour light/dark cycle and constant temperature and humidity conditions. The animal model of cardiac hypertrophy was induced by subcutaneously injecting with Ang II (10 mg/kg) per day for 2 weeks [16][17][18], while rats were injected with an equal volume of saline solution as a control. Rats were individually euthanized and sacrificed with pentobarbital (150 mg/kg body weight, i.p.), and the left ventricles of the hearts were collected for further experiments. 2.3. Immunofluorescence Staining. NRCMs were plated on four-well glass chamber slides (Labtek, Germany) coated with 0.5% Gelatin (Gel) at a cell density of 60%. 
After 72 h, cells were washed in PBS for three times and then fixed in immunostaining fixture solution (Beyotime Biotechnology, Shanghai, China) for 10 minutes at room temperature, and nonspecific binding was blocked with goat serum blocking solution for 1 hour at room temperature. Cells were incubated with the Rap1GAP primary antibody (ab32373, Abcam, USA) or α-actinin (69758S, CST, USA) at 1 : 200 dilutions in goat serum blocking solution overnight at 4°C. The slides were washed three times with PBS and then incubated with Alexa Fluor 488 conjugated goat-anti-rabbit antibody (Invitrogen, USA) or Alexa Fluor 594 conjugated goat-anti-mouse antibody (Invitrogen, USA) for 30 minutes at room temperature. After washing three times with PBS, DAPI nuclei staining was performed. Cells were analyzed by a fluorescence microscope (Olympus Corporation, Tokyo, Japan), and the results of signals were quantified by the software of ImageJ. 2.10. Reactive Oxygen Species (ROS) Measurement. Intracellular ROS production in cardiomyocytes was assessed by the fluorescence intensity of dihydroethidium (DHE, Beyotime Biotechnology) staining. After the cardiomyocytes were subjected to their respective treatments, they were incubated with 10 μM DHE at 37°C for 30 min. The cells were washed with PBS for 3 times, and then, the adherent cells were immediately observed under a fluorescence microscope (Olympus Corporation, Tokyo, Japan). ImageJ software was used to quantify the fluorescence intensity of each picture. Hematoxylin and Eosin (HE) Staining. The tissues of the left ventricle were fixed in 4% formalin and embedded in paraffin and then cut into 5 μm serial sections. Tissue sections were floated onto a warm water bath from where they were placed on slides. After deparaffinizing in xylene and rehydrating in graded ethanol solutions, the sections were stained with eosin solution. Sections were then examined using a microscope (Olympus Corporation, Tokyo, Japan), and the cell surface areas were measured using Image-Pro Plus 6.0 software. A random collection of 10 cardiomyocyte images is calculated, which contains at least 25 cells from the crosssectional area of cardiomyocytes. 2.12. Echocardiography. Rats were lightly anaesthetized with 1.5%-2% isoflurane via inhalation. Anaesthetized rats were subjected to a transthoracic echocardiography, using a Vevo 770 ultrasound with a 25 MHz transducer (Visual Sonics, Oxidative Medicine and Cellular Longevity DAPI-positive nuclei. Image-Pro Plus software (Image Solutions, Torrance, CA, USA) was used to count the cells and calculate the average value. Apoptotic cell number was counted in at least five randomized microscope fields in each of three independent samples under a fluorescence microscope. 2.14. Flow Cytometry for Cell Apoptosis Assay. The Annexin V-PE/7AAD apoptosis detection kit (KeyGEN BioTECH Co., Nanjing, China) was used to examine the cell apoptosis in different groups. Briefly, NRCMs were incubated in sixwell plates at 6 × 10 5 cells/well. After different treatments, NRCMs were collected and resuspended in binding buffer, flowed by incubation with Annexin V-phycoerythrin (PE) and 7-aminoactinomycin D (7AAD) at room temperature for 15 min. C6 Flow Cytometer™ system (BD Biosciences, CA, USA) was used to analyze the apoptotic rate of cells. 2.15. Statistical Analysis. Statistical analysis was performed using GraphPad Version 8.0 (GraphPad Software, La Jolla, CA, USA). All data were expressed as the mean ± SEM. 
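The image-based read-outs described above (DHE fluorescence intensity, cardiomyocyte cross-sectional area, TUNEL-positive counts) all reduce to averaging measurements within each imaged field and then summarizing each group as mean ± SEM. The sketch below illustrates that bookkeeping; it is not the authors' code, and the CSV layout and column names (group, field_id, value) are assumptions for illustration only.

```python
# Minimal sketch (not the authors' code): summarizing per-field image read-outs
# such as DHE fluorescence intensity or cardiomyocyte cross-sectional area.
# Assumes measurements were exported (e.g., from ImageJ / Image-Pro Plus) to a CSV
# with hypothetical columns: group, field_id, value.
import pandas as pd

def summarize_by_group(csv_path: str) -> pd.DataFrame:
    df = pd.read_csv(csv_path)
    # Average within each imaged field first, then summarize per group
    per_field = df.groupby(["group", "field_id"], as_index=False)["value"].mean()
    summary = per_field.groupby("group")["value"].agg(
        n="count", mean="mean", sem="sem"  # mean ± SEM, as reported in the paper
    )
    return summary

if __name__ == "__main__":
    print(summarize_by_group("dhe_intensity.csv"))
```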
Student's t-tests were performed to compare means between two groups. ANOVA followed by Bonferroni's multiple com-parisons was used to compare means from three or more groups. A level of P < 0:05 was considered to be statistically significant. All experiments were performed independently three times. Ang II Induced Cardiomyocyte Hypertrophy, Oxidative Stress, and Apoptosis and Inhibited Autophagy. Compared with vehicle-treated cells, Ang II significantly increased cell surface area (P < 0:01; Figure 1(a)) and the mRNA expression of ANF and BNP (P < 0:05; Figure 1(b)), suggesting that Ang II induced hypertrophy in the NRCMs. To determine the effect of Ang II on autophagy, we measured the markers of autophagy, LC3BII/I and p62 expressions, in the absence or presence of autophagy inhibitor 3-Methyladenine (3-MA). Compared with vehicletreated cells, 3-MA reduced the ratio of LC3BII/I and increased the content of p62 (P < 0:05 and P < 0:01, respectively; Figure 1(c)). Consistent with the results of 3-MA, Ang II also inhibited autophagy (P < 0:01; Figure 1(c)). Subsequently, we measured the markers of autophagy at series Oxidative Medicine and Cellular Longevity time points of Ang II stimulation. Compared with 0 h control, the ratio of LC3BII/I expression reached the lowest level at 24 hours and then gradually increased but still lower at 48 hours than that at 0 h (P < 0:01; Figure 1(d)). The level of p62 significantly increased after Ang II treatment and reached the highest level at 24 hours (P < 0:01; Figure 1(d)). The changes of cardiomyocyte autophagy by Ang II were further confirmed by the experiment of mRFP-GFP-LC3 adenovirus infection in cardiomyocytes. As shown in Figure 1(e), the autophagic flux after Ang II or 3-MA treatment was decreased by reducing the autophagosome conversion to autophagolysosome, as indicated by increase of yellow dots (indicating autophagosome) and decrease of free red dots (indicating autophagolysosome). Compared to those treated with vehicle, autophagosomes and autophagic lysosomes scanned by TEM were significantly decreased (indicated by the red arrow) (P < 0:01; Figure 1(f)), and mitochondrial disruption was observed in the yellow box in cardiomyocytes treated with Ang II. All these data suggest that autophagy was markedly decreased by Ang II treatment. To assess the mechanism of Ang II-induced cardiomyocytes hypertrophy, we analyzed the intracellular ROS and cell apoptosis in cardiomyocytes treated with Ang II. ROS production measured by dihydroethidium (DHE) staining was markedly increased in Ang II-treated NRCMs in comparison to vehicle-treated cells (P < 0:01; Figure 1(g)). TUNEL staining showed that Ang II increased cardiomyocyte apoptosis (Figure 1(h)). Rap1GAP Is Upregulated in Ang II-Induced Cardiomyocyte Hypertrophy. Consistent with our previous results from tandem mass tag (TMT) protein mass spectrometry and bioinformatics analysis [15], Rap1GAP mRNA levels were significantly increased in Ang II-induced hypertrophic cardiomyocytes (P < 0:05; Figure 2(a)). Western blot analysis further demonstrated that Rap1GAP protein expression was also increased by Ang II at different time points. As shown in Figure 2(b), the peak increase of Rap1GAP protein expression was at 24 h of Ang II treatment, which was gradually decreased but still remained higher at 48 h than that at 0 h (P < 0:01). In addition, we also found that the protein expression of Rap1GAP was increased in phenylephrinetreated cardiomyocytes (P < 0:01; Figure 2(c)). 
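The group comparisons reported here follow the pipeline laid out in the Statistical Analysis section: Student's t-test for two groups, and one-way ANOVA followed by Bonferroni-corrected pairwise comparisons for three or more groups (e.g., the LC3BII/I time course). The sketch below illustrates that pipeline with SciPy; the data values are hypothetical and the function is illustrative rather than the authors' implementation.

```python
# Minimal sketch (assumed data layout, not the authors' code) of the comparisons
# described in the Statistical Analysis section: Student's t-test for two groups,
# one-way ANOVA with Bonferroni-corrected pairwise tests for three or more groups.
from itertools import combinations
from scipy import stats

def compare_groups(groups: dict):
    names = list(groups)
    if len(names) == 2:
        t, p = stats.ttest_ind(groups[names[0]], groups[names[1]])
        return {"test": "t-test", "p": p}
    # One-way ANOVA across all groups
    f, p_anova = stats.f_oneway(*groups.values())
    # Bonferroni correction: multiply each pairwise p by the number of comparisons
    pairs = list(combinations(names, 2))
    pairwise = {}
    for a, b in pairs:
        _, p = stats.ttest_ind(groups[a], groups[b])
        pairwise[(a, b)] = min(p * len(pairs), 1.0)
    return {"test": "ANOVA + Bonferroni", "p_anova": p_anova, "pairwise": pairwise}

if __name__ == "__main__":
    # Hypothetical LC3BII/I densitometry ratios at three time points
    lc3_ratio = {"0h": [1.0, 1.1, 0.9], "24h": [0.40, 0.50, 0.45], "48h": [0.70, 0.65, 0.80]}
    print(compare_groups(lc3_ratio))
```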
Immunofluorescence staining showed that Rap1GAP was expressed in both the cytoplasm and nuclei, but mainly in the nucleus. Compared with the vehicle-treated cells, the 10 Oxidative Medicine and Cellular Longevity level of Rap1GAP in NRCMs treated with Ang II was markedly elevated (P < 0:01; Figure 2(d)). Cardiac Rap1GAP Is Increased in Sprague Dawley Rats Infused by Ang II. To further investigate if our findings from in vitro studies are consistent with in vivo study, animal model of cardiac hypertrophy was induced by chronic treatment of Ang II for 2 weeks [16][17][18]. Echocardiographic results showed that the left ventricular end-diastolic diameter (LVEDD) and left ventricular end-systolic dimension (LVESD) were remarkably decreased (P < 0:01; Figures 3(a) and 3(b)), while the ejection fraction (EF) and fractional shortening (FS) were increased in Ang II-treated rats (P < 0:01; Figures 3(a) and 3(b)). Ang II induced cardiac hypertrophy, as reflected by the increased ratios of heart weight (HW)/body weight (BW) and HW/tibia length (TL) (P < 0:01; Figure 3(c)). The Ang II-induced cardiac hypertrophy was also confirmed by the hematoxylin and eosin (HE) staining in left ventricles, indicating that the cardiomyocytes in Ang II-treated rats were significantly larger than those in saline-treated rats (P < 0:01; Figure 3(d)). We measured the cardiac Rap1GAP protein and mRNA levels in the cardiac hypertrophy model compared to the control rats. The results showed that both protein and mRNA levels of Rap1GAP in the heart were higher in Ang II versus saline-treated rats (P < 0:01; Figures 3(e) and 3(f)). Rap1GAP Knockdown Increases Autophagy and Attenuates Oxidative Stress in Ang II-Treated Cardiomyocytes. To further determine the functional role of Rap1GAP in Ang II-induced cardiomyocyte hypertrophy, neonatal rat cardiomyocytes were transfected with small interfering RNA against Rap1GAP (si-Rap1GAP) or scrambled control (si-control), followed by Ang II stimulation for 24 hours. RT-PCR indicated that compared with the si-control group, the Rap1GAP mRNA was significantly reduced by approximately 60% after si-Rap1GAP transfection (P < 0:05, Figure 4(a)). Consistently, Rap1GAP siRNA reduced the protein expression of Rap1GAP compared to control siRNA (P < 0:01; Figure 4(b)). Moreover, knockdown of Rap1GAP dramatically decreased Ang II-induced expression of ANF and BNP compared with the si-control (P < 0:05 or 0.01; Figure 4(a)). Meanwhile, immunostaining of NRCMs for αactinin showed that knockdown of Rap1GAP markedly attenuated the increase in cardiomyocyte hypertrophy induced by Ang II (P < 0:01; Figure 4(c)). 12 Oxidative Medicine and Cellular Longevity Next, we evaluated the effects of si-Rap1GAP on autophagy in cardiomyocytes. As shown in Figure 4(b) in both vehicle-and Ang II-treated cells, compared with si-control, the expression of LC3BII/I was increased and the level of p62 was decreased after Rap1GAP knockdown (P < 0:05 or 0.01). In the experiment of mRFP-GFP-LC3 adenovirus infection, compared with the si-control group, Rap1GAP knockdown increased the red dots and decreased the merged yellow spots, and the binding of autophagosomes to lysosomes was blocked. The data indicated that Rap1GAP deficiency enhanced the Ang II-induced autophagy flux (Figure 4(d)). TEM results also revealed that si-Rap1GAP mitigated Ang II-induced reduction of autophagosomes and autolysosomes in cardiomyocytes and improved Ang II-induced mitochondrial fractures (P < 0:01; Figure 4(e)). 
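The roughly 60% reduction in Rap1GAP mRNA reported above for si-Rap1GAP is the kind of estimate conventionally obtained with the 2^-ΔΔCt method for relative quantification by RT-PCR. The sketch below shows that arithmetic; the reference gene (GAPDH) and all Ct values are hypothetical placeholders rather than data from this study.

```python
# Minimal sketch of the 2^-ΔΔCt calculation behind a ~60% knockdown estimate.
# The reference gene (GAPDH) and the Ct values below are hypothetical placeholders.

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Return expression of the target gene relative to the control group."""
    delta_ct = ct_target - ct_reference            # normalize to reference gene
    delta_ct_ctrl = ct_target_ctrl - ct_reference_ctrl
    delta_delta_ct = delta_ct - delta_ct_ctrl      # normalize to control group
    return 2 ** (-delta_delta_ct)

if __name__ == "__main__":
    # si-control: Rap1GAP Ct 24.0, GAPDH Ct 18.0; si-Rap1GAP: Rap1GAP Ct 25.3, GAPDH Ct 18.0
    rel = relative_expression(25.3, 18.0, 24.0, 18.0)
    print(f"Remaining Rap1GAP mRNA: {rel:.2f} of control "
          f"({(1 - rel) * 100:.0f}% knockdown)")
```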
To further study the mechanisms and the involved signaling pathways by which Rap1GAP regulates autophagy, we examined the activity of the AMPK/AKT/mTOR pathway which was the classical pathway of autophagy regulation. The results showed that Rap1GAP knockdown significantly decreased the expression of p-AKT and p-mTOR (P < 0:05 and P < 0:01, respectively) and increased the expression of p-AMPK and p-p70s6k (P < 0:05 and P < 0:01, respectively; Figure 5(a)). We also examined the effects of Rap1GAP knockdown on oxidative stress and apoptosis in Ang II-induced hypertrophic cardiomyocytes. Compared with control siRNA, Rap1GAP knockdown reduced the ROS generation in both vehicleand Ang II-treated cardiomyocytes (P < 0:01; Figure 5(b)). The ratio of cleaved-caspase-3/caspase-3 was reduced in Rap1GAP-deficient cardiomyocytes compared to Rap1GAP intact cells (P < 0:01; Figure 4 Rap1GAP Overexpression Reduced Autophagy and Increased Oxidative Stress in Ang II-Treated Cardiomyocytes. Figure 6(a) shows the structure of the Rap1-GAP plasmid construct. The gene transduction rate of the viral vector measured by flow cytometry analysis was 85.4%, which was consistent with the result of GFP immunofluorescence image ( Figure S1). Compared to control vector (Ad-GFP), the cardiomyocytes infected with adenovirus vector overexpressing Rap1GAP (Ad-Rap1GAP) increased the content of Rap1GAP (P < 0:01; Figure 6(b)) and enhanced the expression of ANF and BNP (P < 0:05; Figure 6(b)) after Ang II treatment. Moreover, Rap1GAP overexpression dramatically increased the hypertrophic growth of cardiomyocytes in response to Ang II (P < 0:01; Figure 6(c)). Rap1GAP overexpression dramatically decreased the level of autophagy in Ang II-treated cardiomyocytes, demonstrated by the decreased ratio of LC3BII/I and the increased p62 (P < 0:05 or P < 0:01; Figure 6(d)). Consistently, TEM results showed that overexpression of Rap1GAP aggravated Ang II-induced decrease of autophagosomes and increase of mitochondrial fractures compared with the Ad-GFP group (P < 0:01; Figure 6(e)). In contrast to Rap1GAP knockdown, Rap1GAP overexpression increased the expression of p-AKT and p-mTOR 14 Oxidative Medicine and Cellular Longevity (P < 0:01 and P < 0:05, respectively) and reduced the expression levels of p-AMPK and p-p70s6k compared with Ad-GFP control (P < 0:05; Figure 7(a)). Moreover, the overexpression of Rap1GAP increased oxidative stress and promoted apoptosis in cardiomyocytes. DHE staining showed that Ad-Rap1GAP significantly increased the generation of ROS compared to Ad-GFP control (P < 0:05 or 0.01; Figure 7(b)). The overexpression of Rap1GAP led to an increase in the ratio of cleaved-caspase-3/caspase-3 (P < 0:05; Figure 6(d)). As shown in Figure 7(c) and Figure S2, the apoptotic rate of NRCMs infected with Ad-Rap1GAP was significantly increased compared to that of Ad-GFP control (P < 0:01 or P < 0:05). The Effects of Rap1GAP Overexpression/Knockout on Oxidative Stress Are Reversed by Autophagy Inducer/Inhibitor. To confirm the relationship between autophagy and oxidative stress, we treated NRCMs with autophagy inhibitor 3-Methyladenine (3-MA) and autophagy agonist rapamycin (RAPA) for 1 hour before the treatment with Ang II. Cells were harvested after exposure to Ang II for 24 hours. Compared with the Ad-Rap1GAP +Ang II group, the level of LC3BII/I was significantly increased and p62 was suppressed in RAPA-treated cardiomyocytes (P < 0:05 or P < 0:01; Figure 8(a)). 
Moreover, it attenuated the increase of ROS caused by Rap1GAP overexpression (P < 0:01; Figure 8(b)). In contrast, compared with the si-Rap1GAP+Ang II group, the content of LC3II/I was decreased and p62 was increased in 3-MA-treated NRCMs (P < 0:01; Figure 8(c)). Similarly, the ROS reduction caused by Rap1GAP knockdown was increased in the 3-MAtreated group (P < 0:05 or P < 0:01; Figure 8(d)). Discussion To our knowledge, this is the first study demonstrating that Rap1GAP is a critical mediator in Ang II-induced cardiomyocyte hypertrophy. Rap1GAP is a member of the Ras superfamily. The Ras family has been intensively studied in the field of cancer research, but rarely in cardiovascular diseases. It has been reported that the knockout of H-Ras, another member of the Ras subfamily, can prevent Ang IIinduced arterial hypertension and ventricular remodeling [19,20]. Ramos-Kuri et al. found that the Ras mutant Ras-Val12 or the dominant negative mutation N17-DN-Ras induced cardiac hypertrophy and produced cardiotoxicity [21]. As a member of RAS superfamily, Rap1GAP is an important tumor suppressor in tumor tissues, which inhibits cell proliferation, migration, and angiogenesis by attenuating the level of adhesion proteins that regulate cancer cell invasion, thereby increasing apoptosis and exerting tumor suppressive effects [22,23]. In the present study, we found that Rap1GAP was expressed in cardiomyocytes and elevated in Ang II-induced hypertrophic cardiomyocytes. Moreover, Rap1GAP was also increased in the heart of Ang II-treated rats compared to control rats. Knockdown of Rap1GAP attenuated while overexpression of Rap1GAP accelerated Ang II-induced oxidative stress, apoptosis, and hypertrophy in cardiomyocytes. Meanwhile, Rap1GAP was closely related to cell autophagy. These results demonstrate a new role of Rap1GAP in the heart, which is the involvement of cardiac hypertrophy and remodeling. The hypoxic environment is caused by increased cardiac oxygen consumption during the compensatory stage of cardiac hypertrophy. However, the disorders of oxygen metabolism eventually lead to heart failure. We hypothesized that Rap1GAP mediates the conversion of Rap1 from an active form to an inactive form, resulting in increased hypoxia in cardiomyocytes, and the increased ROS production aggravates myocardial damage. Yang et al. showed that activated Rap1 acts as a negative regulator of mitochondrial ROS production in the heart. The active form of Rap1 (Rap1GTP) was reduced by selective inhibition of Epac2 in adult rat ventricular myocytes [13]. Studies on other cell types showed that Rap1GAP increases ROS production by activating NADPH oxidase in retinal pigment epithelial cells and reduces [24]. Our data in cardiomyocytes are consistent with their results, demonstrating that ROS was significantly increased by Rap1GAP upregulation in hypertrophic cardiomyocytes, while it was inhibited by Rap1GAP knockdown. Our work showed that Rap1GAP mediates cardiac remodeling by increasing ROS production. Recent studies showed that autophagy plays a crucial role in Ang II-induced cardiac hypertrophy. Our data indicate that Rap1GAP inhibits autophagy in cardiomyocytes, resulting in autophagosome formation and mitochondrial damage. Uncontrolled autophagic disorders and mitochondrial disruption block the energy metabolism of cardiomyocytes and ultimately lead to cardiomyocyte death [25,26]. 
Studies showed that autophagy also reduces oxidative stress damage by phagocytosis and degradation of oxidative derivatives [10,27]. Here, we demonstrate the relationship between Rap1GAP-mediated autophagy and oxidative stress by using the inhibitors and inducers of autophagy. 3-Methyladenine exerts an effect of suppressing autophagy by inhibiting the autophagosome formation, while rapamycin is a macrolide immunosuppressant that specifically inhibits mTOR activation and activates autophagy by reducing phosphorylated mTOR. We have shown that oxidative stress was increased by blocking increased autophagy associated with Rap1GAP knockdown. In contrast, Rap1GAP overexpression-induced increase of ROS was attenuated by the activation of autophagy. Apoptosis plays an important role in the pathological process of cardiac hypertrophy to heart failure. The increased cardiomyocyte apoptosis has been reported in Ang IIinduced cardiac hypertrophy. In the compensatory stage of cardiac hypertrophy, apoptosis causes a progressive decrease in the cardiac functional contractile unit, while viable cardiomyocytes are adaptive hypertrophic accompanied by myocardial fibrosis. Numerous studies have found that both autophagy and oxidative stress induce apoptosis. Here, we demonstrate that Rap1GAP induces cardiomyocyte apoptosis which might be a main mechanism of Rap1GAP in cardiovascular diseases. Numerous signaling molecules are involved in the autophagy regulation. AMP-activated protein kinase (AMPK) is a positive regulator of autophagy and plays a key role in maintaining energy balance. However, the abnormal AMPK also participates in the pathogenesis of cardiac hypertrophy, inflammatory response, and myocardial fibrosis by affecting cellular metabolisms. Mitochondrial ROS is a physiological activator of AMPK, and AMPK can alleviate the impaired redox balance caused by reactive oxygen stress by enhancing the bioavailability of oxides [28]. Findings from the present study demonstrated that Rap1GAP functions in Figure 9: Hypothesized mechanisms of the role of Rap1GAP in Ang II-induced cardiomyocyte hypertrophy. Rap1GAP inhibits autophagy by suppressing the AMPK/AKT/mTOR signaling pathway and increases ROS production in Ang II-induced hypertrophic cardiomyocytes. Inhibition of autophagy reduces ROS clearance and further aggravates cardiac injury. 18 Oxidative Medicine and Cellular Longevity cardiomyocytes through its regulation on AMPK and its downstream targets. AKT regulates cell growth and survival, and it exerts antiapoptotic effects by phosphorylating target proteins through various downstream pathways. mTOR is a downstream effector of AKT that mediates cellular nutrient metabolism and aging [29]. The AMPK/AKT/mTOR signaling pathway is a classical autophagy pathway in various tissues such as the heart, liver, and endothelium [30,31]. This study has demonstrated that Rap1GAP increases the phosphorylation of AKT and mTOR by regulating AMPK phosphorylation, reduces the phosphorylation level of p70s6k, which is a target downstream of mTOR, and finally inhibits autophagy in cardiomyocytes. In summary, this study demonstrates that Rap1GAP is a new mediator of Ang II-induced cardiomyocyte hypertrophy through its regulating on cardiac autophagy, oxidative stress, and apoptosis by mediating the AMPK/AKT/mTOR signaling pathway. It might be a potential therapeutic target for the treatment of cardiac hypertrophy and heart failure, which is worth further investigations. 
However, we also acknowledge the limitations of this study. Although the present data demonstrate the involvement of Rap1GAP in Ang II-induced cardiomyocyte hypertrophy, more specific animal models, including cardiomyocyte-specific Rap1GAP knockout mice, are needed to further define the role of this new player in the pathogenesis of cardiac hypertrophy and other cardiovascular diseases. Based on our findings, we propose a working model of how Rap1GAP contributes to Ang II-induced cardiomyocyte hypertrophy, as shown in Figure 9. In addition, our ongoing mass spectrometry analyses may reveal additional signaling pathways involved in the functions of Rap1GAP in cardiovascular disease. Because this report is the first to describe Rap1GAP function in cardiomyocytes, much remains unknown about this protein in the heart, and further studies are planned in our laboratory to investigate its regulation and cardiac functions. Data Availability: All data sets generated or analyzed for this study are included in the manuscript and the Supplementary Files. Conflicts of Interest: The authors declare that they have no conflict of interest.
2021-05-01T05:15:09.236Z
2021-04-15T00:00:00.000
{ "year": 2021, "sha1": "c051ff08e4101070a4ecc0228979422c525d5f9e", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2021/7848027", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c051ff08e4101070a4ecc0228979422c525d5f9e", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
231945883
pes2o/s2orc
v3-fos-license
Social connectedness and negative affect uniquely explain individual differences in response to emotional ambiguity Negativity bias is not only central to mood and anxiety disorders, but can powerfully impact our decision-making across domains (e.g., financial, medical, social). This project builds on previous work examining negativity bias using dual-valence ambiguity. Specifically, although some facial expressions have a relatively clear negative (angry) or positive valence (happy), surprised expressions are interpreted negatively by some and positively by others, providing insight into one’s valence bias. Here, we examine putative sources of variability that distinguish individuals with a more negative versus positive valence bias using structural equation modeling. Our model reveals that one’s propensity toward negativity (operationalized as temperamental negative affect and internalizing symptomology) predicts valence bias particularly in older adulthood when a more positive bias is generally expected. Further, variability in social connectedness (a propensity to seek out social connections, use those connections to regulate one’s own emotions, and be empathic) emerges as a notable and unique predictor of valence bias, likely because these traits help to override an initial, default negativity. We argue that this task represents an important approach to examining variability in affective bias, and can be specifically useful across the lifespan and in populations with internalizing disorders or even subclinical symptomology. Results Valence bias. As in previous work, there was a wide range of inter-participant variability in valence bias for both the faces and scenes (see Fig. 1; higher scores are associated with a more negative bias). Interestingly, the ratings of ambiguous faces were more negative and also more variable across participants than ratings of scenes. As expected, valence bias across stimuli was significantly positively correlated, r = 0.22 (see Table 1); however, the correlation was small in magnitude. Associations among measures. Correlations among concurrent measures were all in the expected direction ( Table 1). The strongest association among indicators of negative affect was between neuroticism (NEON) and trait anxiety (STAIT; r = 0.79), and the weakest was between depression symptoms (BDI) and difficulties in emotion regulation (DERS; r = 0.53). The strongest association among indicators of social connectedness was between empathy (EQ) and extraversion (NEOE; r = 0.49), and the weakest was between empathy (EQ) and interpersonal emotion regulation (IRQ; r = 0.36). As expected, average valence bias (i.e., higher values represent a more negative bias) was significantly positively correlated with depression symptoms (r = 0.07), neuroticism (r = 0. 16), and state (r = 0.09) and trait (r = 0.11) anxiety, and negatively correlated with interpersonal emotion regulation (r = − 0.11), extraversion (r = − 0.15) and age (r = − 0.18). Table 1. Correlations among dependent variables. ***p < 0.001 (two-tailed), **p < 0.01 (two-tailed), *p < 0.05 (two-tailed). BDI Beck Depression Inventory, DERS Difficulties in Emotion Regulation Scale, NEON neuroticism, STAIS state anxiety, STAIT-Trait Anxiety, EQ Empathy Quotient, IRQ Interpersonal Regulation Questionnaire, NEOE Extraversion, VB Valence Bias-higher scores associated with a more negative bias. 
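As defined in the Methods, the valence bias score is the percentage of ambiguous trials rated negative (omissions excluded), computed separately for faces and scenes, and the associations in Table 1 are Pearson correlations between these scores and the questionnaire measures. A minimal sketch of that scoring, assuming a trial-level data frame with hypothetical column names (subject, stimulus, rating), is shown below; it is not the authors' analysis code.

```python
# Minimal sketch (hypothetical column names, not the authors' code) of the valence
# bias score: percent of ambiguous trials rated negative, per stimulus type.
import pandas as pd

def valence_bias(trials: pd.DataFrame) -> pd.DataFrame:
    """trials: ambiguous trials only, with columns subject, stimulus ('face'/'scene'),
    and rating ('neg'/'pos'); missing ratings are omissions."""
    trials = trials.dropna(subset=["rating"])      # omissions are excluded
    bias = (
        trials.assign(neg=trials["rating"].eq("neg"))
        .groupby(["subject", "stimulus"])["neg"].mean()
        .mul(100)                                  # percent negative ratings
        .unstack("stimulus")
    )
    # Average bias is treated as missing if a participant rated only one stimulus type
    bias["average"] = bias[["face", "scene"]].mean(axis=1, skipna=False)
    return bias

# Bivariate associations, as in Table 1 (Pearson r between bias and questionnaires):
# scores = bias.join(questionnaires, on="subject")
# print(scores.corr(method="pearson"))
```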
The three factors demonstrated adequate discriminant validity; negative affect was inversely correlated with social connectedness (r = − 0.52, p < 0.001) and positively with a more negative valence bias (r = 0.19, p < 0.001), and social connectedness was inversely correlated with a more negative valence bias (r = − 0.25, p < 0.001). We also examined correlations between the three latent factors and age which were significant (see Table 2). Interestingly, although there was a strong inverse correlation between negative affect and social connectedness, both were also inversely correlated with age. As predicted, age was also inversely correlated with a more negative valence bias (i.e., older adults had a more positive valence bias). Results of structural equation modeling. After establishing adequate fit of the measurement model, we tested the full hypothesized model with negative affect, social connectedness, and age (z scored for interpretation) as predictors of valence bias, but omitted the interaction between negative affect and age. This step is necessary in the context of latent moderated structural equation models which do not produce traditional model fit indices. Maslowsky et al. 57 recommend evaluating the global fit of the model without the interaction first, and then adding the interaction to examine its significance. The global fit of this model was adequate (CFI = 0.931, TLI = 0.903, RMSEA = 0.066, 90% confidence interval = 0.058-0.073, SRMR = 0.054). Next, we added the latent interaction between negative affect and age to the model which was significant, b = 0.17 (0.06), p = 0.010. Note that because there were 113 participants who had missing scores on both variables used to define the latent interaction (i.e., negative affect and age), those cases were not included in this model. Figure 2 shows the final model results (see Supplementary Table S3 for the unstandardized model solution). As expected, social connectedness was inversely associated with a more negative valence bias when controlling for negative affect and age, β = − 0.26, p = 0.002. There was also a significant interaction between negative affect and age predicting valence bias, β = 0.16, p = 0.01, such that there was a stronger positive association between negative affect and a more negative valence bias for older individuals. To probe the significant interaction, we conducted a regions of significance analysis such that conditional effects of negative affect on valence bias were estimated at all observed levels of age, and the significance of those conditional effects were examined. Negative affect was positively associated with a more negative valence bias, controlling for social connectedness, for individuals age 51.6 and older [i.e., beginning at 1.95 SDs above the mean of age, M (SD) = 28.06 (12.07); Fig. 3]. Finally, based on some work showing that older age is associated with a greater investment in more rewarding and meaningful social relationships 58 , we explored a supplementary model that included the latent interaction between social connectedness and age, but this interaction was not significant, b = − 0.14 (0.09), p = 0.094. Discussion There are vast individual differences in the tendency to interpret dual-valence ambiguity as having a more positive or negative meaning. 
This valence bias is not only central to mood and anxiety disorders, but can powerfully impact our decision-making across domains (e.g., financial, medical, social), and thus have dramatic and widespread consequences for many aspects of our lives. To examine the underlying sources of variability that might distinguish individuals with a more negative versus positive valence bias, we conducted an analysis across fourteen experiments. Broadly, we found that both negative affect and social connectedness appear to uniquely explain individual differences in valence bias. This finding is consistent with extensive research that has demonstrated an important link between negativity bias and mood and anxiety disorders 1-3 . We have built on this link by demonstrating a relationship between valence bias and negative affect, which was conceptualized as a latent measure representing temperamental negative affect and internalizing symptomology. Although negative affect was significantly correlated with a more negative valence bias (Table 2), upon examining its unique association with bias (i.e., controlling for social connectedness), it was only associated with valence bias for older adults. In other words, older adults (i.e., around 51 years of age and up) that were higher in negative affect showed a more negative valence bias independent from social connectedness (i.e., no such association was evident in younger adults). Interestingly, prior work on valence bias in younger adults has demonstrated that the initial or default interpretation of ambiguity is negative 23,[43][44][45] , and that positive interpretations require an additional regulatory process that overrides this initial negativity 47 . Other related work has shown that children show a more negative bias than adults putatively because regulatory mechanisms responsible for producing a positive bias are weaker in children Table 2. Correlations among latent variables and age. ***p < 0.001 (two-tailed), **p < 0.01 (two-tailed), *p < 0.05 (two-tailed). www.nature.com/scientificreports/ Figure 2. Results of the final model. Social connectedness was inversely associated with a more negative valence bias when controlling for negative affect and age. There was also evidence for a significant interaction between negative affect and age predicting valence bias, such that there was a stronger positive association between negative affect and a more negative bias for older individuals. Values along each arrow are standardized estimates, with associated standard error in parentheses, and significance level represented by asterisks. . Depiction of the significant interaction between negative affect and age. The straight line represents conditional effects of negative affect on valence bias at different ages and the curved lines reflect the 95% confidence interval for those effects. Negative affect was positively associated with a more negative valence bias, controlling for social connectedness, for individuals age 51.6 and older (i.e., beginning at 1.95 SDs above the mean of age). www.nature.com/scientificreports/ than adults 59 , see also 60 . As such, it could be that, while a negativity bias is associated with dysfunction (depression, anxiety), it is perhaps the relative failure to develop mechanisms for regulating or overriding the negativity that may serve both to maintain the negativity bias into adulthood and to increase the risk for disorders (see 61 ). 
Although there is an overall association between negative affect and valence bias in the current work, we found that negative affect was not uniquely predictive of valence bias after controlling for social connectedness in young adults. This finding might be due to their default negativity. In other words, when the default response is negative, the impact of negative affect (temperament and symptomology) may be diminished. In contrast, in older adults, a positive valence bias is more likely 43,62 and may even represent the new default 63 . Thus, in this population, increases in negative affect appear to play a more crucial role in impacting valence bias. Alternatively, it could be that the interaction with age is due to some developmental process whereby, as individuals increasingly engage with more stimuli, there are more opportunities for their underlying proclivity toward negativity to manifest in a more negative valence bias, thus resulting in a more robust and stable bias. Future longitudinal work will be needed in order to disentangle these effects across different stages of the lifespan. Indeed, it could be that there are important mediators of the relationship between valence bias and negative affect (or negativity bias per se) in young adulthood (see 64 ). Further, implementing this developmental framework would allow us to track within-person developmental processes and examine if and how this process strengthens over time. Having said that, valence bias is measured along the full valence spectrum, from negative to positive; thus, we would be remiss to discuss the individual differences that support a negativity bias without mentioning the ramifications for positivity bias. We found that variability in social connectedness, which was conceptualized as a latent measure representing empathy, extraversion, and interpersonal emotion regulation (i.e., the tendency to rely on others in order to regulate one's own emotions), was inversely associated with a more negative valence bias. In other words, individuals higher in social connectedness showed a more positive bias. Notably, although social connectedness is predominantly conceptualized as a facet of positive emotionality, low levels of social connectedness are significantly associated with greater negative affect (see Table 2). Results of the SEM analyses suggest that only the truly positive features of social connectedness (unique from negative affect) are predictive of valence bias. As briefly mentioned above, one might predict that variability in positive affect may be more important in predicting valence bias than the variability in negative affect, given that the negative bias represents the default response (in young adults). In other words, although there are important individual differences that reinforce a more negative bias, the ability to override the default negativity appears to more critically rely on variability in social connectedness that helps to downregulate or overcome the tendency to view dual-valence ambiguity in a negative light. In contrast, a low propensity toward experiencing negative emotions (low sensitivity to distressing stimuli) may be insufficient to demonstrate a positive bias, at least in young adulthood. Future longitudinal work could examine the development of these social connectedness measures over the course of one's lifetime. 
This work could also prove useful in improving our ability to identify individuals who are at risk for maintaining this negativity bias and developing depression or anxiety (e.g., people low in social connectedness and therefore lower in positivity bias). Given the theoretical connection between valence bias and negativity bias and its associated symptomology, we explored the bivariate associations between the average valence bias and individual measures within each latent construct (negative affect, social connectedness) and age. Consistent with our predictions, a more negative valence bias was positively correlated with four of the five individual measures within the negative affect construct: depression symptoms, neuroticism, and state and trait anxiety. Further, it was correlated with two of the three individual measures within the social connectedness construct: interpersonal emotion regulation and extraversion. It is intriguing that valence bias did not show a relationship with empathy, as in previous work 23 , future research will be needed to further explore this link. As predicted, a more negative valence bias was inversely correlated with age. This finding is consistent with prior work demonstrating that increasing age is associated with a more positive valence bias 43,62 . Interestingly, although negative affect was inversely related to social connectedness, both of these constructs were inversely related to age (i.e., older adults showed decreases in negative affect and social connectedness). Future work targeting this aging population will be helpful to explicate the mechanisms underlying these changes, specifically as they relate to a shift toward a more positive valence bias. In sum, although extant behavioral, psychophysiological, and neuroimaging research has provided important information about the valence bias, it has fallen short of elucidating the mechanisms underlying this variability. Extensive research has focused on dispositional negativity (see 65 for a review), but here, we provide a model suggesting that the variance related to positivity (that may specifically help to overcome a negativity bias) is more sensitive to the valence bias than an approach that focuses on negativity. In other words, social connectedness emerged as a unique predictor of valence bias, and variability in negativity was an important predictor only in later life, when a positive bias is generally expected. In other words, a heightened sensitivity to subtle indicators of positivity and interpersonal connections might facilitate the override of default tendencies toward negativity, but a low propensity toward negative emotions (e.g., low depression/anxiety symptomology) is insufficient for a positive bias. These results are consistent with previous work showing that specific traits are associated with the propensity to experience negative and positive affect, respectively 36 , 39 ), and differences in neural responses to emotion 34,35 . However, the results build on existing research by establishing a methodological advance in studying valence bias that implements dual-valence ambiguity (both positive and negative information are present in the stimuli). www.nature.com/scientificreports/ These findings provide further evidence for the idea that this bias represents a relatively stable trait-like indicator of underlying individual differences 22,23 . 
As such, we argue that this task represents an important approach for future work examining variability in affective bias, and can be useful in research across age and in populations with related affective disorders or even subclinical symptomology. Indeed, there are a variety of benefits of this performance-based measure of valence bias. First, although much of the literature on negativity bias has focused on comparing patients and controls (e.g., 66 ), our task has demonstrated a dimensional association between bias and subclinical depression symptomology 61 . Second, our task shows high test-retest reliability across one year, indicating that the bias is a trait-like difference across individuals 23 . It also generalizes across ratings of different stimuli, including ambiguously valenced (surprised) facial expressions, scenes, and emotionally laden words 22 , 67 . Third, it engages an amygdala-prefrontal cortex (PFC) circuitry 21,47,61 , similar to that implicated in disorders characterized by a negativity bias and emotion dysregulation 66,68 . Fourth, this task is developmentally appropriate-it has been leveraged for studying valence bias and its association with depressive symptomology and emotion regulation across the lifespan (ages 6-88 years), from children-including those experiencing early life stress [59][60][61] to older adults 43,62,63 . Finally, it is sensitive to a range of contextual manipulations, including but not limited to stress 69 and exercise 70 . As a result, this task offers a novel contribution to research on negativity bias using an approach that controls the information perceived by the participants (i.e., viewing the same dual-valence images) and enables a stable measure of bias. Future work could also examine the functional outcomes of these effects. For example, given that differences in valence bias have important consequences for social and emotional function (e.g., depression/anxiety symptomology), further research should be directed at examining the utility of this model in distinguishing individuals with normal versus aberrant function. Results of this research can inform interventions by identifying individuals with heightened resilience or increased risk for affective disorders. Indeed, a positive valence bias is associated with resilience in the face of stress 71 . Further, ongoing work shows promise for emotion regulation training and mindfulness-based stress reduction see 24 , both strategies used as interventions for mood and anxiety disorders, in promoting a more positive valence bias. Given that social connectedness is a unique predictor of valence bias, and appears to support overriding the initial negativity, a specific focus on building social connections (e.g., training in interpersonal emotion regulation or loving kindness meditation; see 49 ) might be explored as a putative intervention for ameliorating a chronic negativity bias. Table S1). 
A total of 326 participants were removed from the MTurk sample for the following reasons: 183 were removed because they failed to complete at least 75% of the experiment, an additional 104 were removed because they completed the experiment in less than 300 s (which was determined to not be feasible for the number of trials included in the study), four were removed because they did not complete the valence bias task, which was crucial for the analyses in the current report, and finally, 35 were removed for having inaccurate ratings in the valence bias task (we use a standard threshold of 60% accuracy as in previous work 23,43 ). In addition, 96 participants that completed experiments in the lab were excluded: 64 for having inaccurate ratings in the valence bias task, and an additional 32 due to technical issues. The final sample resulted in 1390 adult participants (754 female, 562 male, 74 not reported). Age data was lost for 113 participants, but for the remaining sample, the age range was 17-88 years [mean (SD) age = 28.06 (12.07)]. Race data was lost for 73 participants, but for the remaining sample, there were 2 identifying as American Indian/Alaska Native, 117 as Asian, 61 as Black/ African-American, 895 as White, 4 as Multiracial, 46 as Other, and 192 as Unknown/Choosing not to identify. Nebraska-Lincoln and through Amazon's Mechanical Turk (MTurk) were included in this analysis (Supplementary None of the participants were aware of the purpose of the experiment. All participants had normal or corrected-to-normal vision, and they were compensated for their participation through monetary payment or course credit. Before each session, written or electronic informed consent was obtained from all participants, with a waiver of informed consent from parents/legal guardians for minors that were enrolled as students at the University of Nebraska-Lincoln. All of the procedures were carried out in accordance with the relevant guidelines and regulations, and approved by the ethics committee of University of Nebraska-Lincoln for the Protection of Human Subjects. The only criteria used for selecting participants to include in the report was that they completed the valence bias task, and the same set of surveys, for use as the dependent variables in our analyses. Procedures. Across all experiments, participants first completed a valence bias task. In this task, images of faces from the NimStim 72 , Karolinska Directed Emotional Faces 73 and Umea University Database of Facial Expressions 74 sets and scenes from the International Affective Picture System (IAPS 75 ), were presented. The faces included angry, happy, and surprised expressions, and the IAPS scenes were those previously identified as having a negative, positive, or ambiguous valence,ambiguity was defined in pilot work as those images that have a high standard deviation in valence ratings (i.e., individuals were in the most disagreement about their valence 22 ). For both stimuli, previous work found that, unlike ratings of clearly negative and positive stimuli, there is a wide range of inter-participant variability in ratings of ambiguous faces and scenes 22,23 . Participants www.nature.com/scientificreports/ set of images (faces or IAPS, which were presented in separate blocks) presented in a pseudorandom order, and block order was counterbalanced between participants. Task data were collected in several software packages: Eprime (Psychology Software Tools, Pittsburgh, PA, USA), MouseTracker 76 , and Qualtrics (Qualtrics, Provo, UT). 
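The exclusion criteria described above amount to a few row-level filters on per-participant summaries. The sketch below illustrates them; the column names are hypothetical and the code is not the authors' pipeline.

```python
# Minimal sketch (hypothetical column names) of the exclusion criteria described
# above: <75% of the experiment completed, total time under 300 s, missing valence
# bias task, or below 60% accuracy on the clearly valenced trials.
import pandas as pd

def apply_exclusions(participants: pd.DataFrame) -> pd.DataFrame:
    keep = (
        (participants["prop_completed"] >= 0.75)
        & (participants["duration_sec"] >= 300)
        & participants["has_valence_task"]
        & (participants["clear_trial_accuracy"] >= 0.60)
    )
    return participants[keep]
```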
Measures. Valence bias. The dependent measure used to quantify valence bias was percent negative ratings, which is calculated as the percent of trials on which a participant viewed an emotionally ambiguous stimulus and rated it as negative, out of the total number of trials for that condition (excluding omissions). Notably, previous work has demonstrated that the valence bias generalizes across faces and scenes; the same people that tend to rate surprised faces as positive also tend to rate ambiguous scenes as positive 22 . Given that these two valence bias scores should represent some stable, trait-like measure of the tendency to interpret ambiguity as having a more positive versus negative meaning, valence bias was represented by a latent construct using these two scores. This approach provides a more stable representation of valence bias that is common across the face and scene contexts. Having said that, to explore the bivariate correlations between valence bias and the other individual difference measures, we also created a bias score using the average of the face and scene ratings. This average score represents both the common and unique variance among the two measures. While most of the participants completed valence bias tasks with both stimuli, a subset of participants completed the task with only faces,for those latter individuals, the average valence bias score was treated as missing. Negative affect. We administered a series of questionnaires to assess individual differences in negative affect. In order to account for missing data for any one question on a given questionnaire, which was minimal (< 1%), replacement scores were calculated with mean imputation by scale, and after reverse coding items as needed. Depression symptomology. Scores were extracted from the Beck Depression Inventory-II (BDI 77 ), one of the most widely used measures to assess severity of depression symptomology. This measure has demonstrated reliability and validity in a number of studies 77 . Each of the 21 items consists of four self-evaluative statements, ranging in severity, for which respondents select the statement that best describes their symptoms from the last 2 weeks. Ratings from each item are summed, with a possible range of 0-63. In the present study, internal consistency was excellent (Cronbach's alpha = 0.898). Difficulty in emotion regulation. Scores were extracted from the Difficulties in Emotion Regulation Scale (DERS 78 ) to assess typical levels of difficulties in emotion regulation. This measure is based on a clinically-useful conceptualization of emotion regulation that was designed to be applicable to a wide variety of psychological difficulties and relevant to treatment development for clinical populations 79 . The DERS is a 36-item, factor-analytically derived questionnaire that comprises six subscales representing unique dimensions of emotion regulation difficulties: (1) nonacceptance of emotional responses, (2) difficulty engaging in goal-directed behavior, (3) impulse control difficulties, (4) lack of emotional awareness, (5) limited access to emotion regulation strategies, and (6) lack of emotional clarity. The DERS has demonstrated excellent convergent and discriminant validity. Respondents rate how often each item applies to them using a 5-point Likert scale (1 = Almost Never, 5 = Almost Always). Ratings from each item are summed to create a composite score across the six factors, with a possible range of 36-180. 
In the present study, internal consistency was excellent (Cronbach's alpha = 0.948). Neuroticism. Scores were extracted from the NEO Five-Factor Inventory (NEO-FFI 80 ) to assess neuroticism (NEON). The NEO-FFI is one of the most widely used instruments to assess personality. This 60-item questionnaire includes scales to measure the big five personality traits, and respondents rate each item on the degree to which it is true or not true of them, using a 4-point Likert scale (0 = Strongly Agree, 3 = Strongly Disagree). Due to differences in the survey procedure across studies, some participants only completed a subset of the 60-item questionnaire (e.g., neuroticism and extraversion scales only). Although the NEO-FFI has had multiple revisions since its original development in 1990, adequate reliability has been consistency demonstrated over the years 81 . The NEON scale consists of 12 items that were summed, with a possible range of 0-48. In the present study, internal consistency was excellent (Cronbach's alpha = 0.862). State and trait anxiety symptoms. Scores were extracted from the State-Trait Anxiety Inventory (STAI 82 ) to assess state and trait anxiety symptomology. State Anxiety (STAIS) is measured with a 20-item questionnaire in which respondents rate each item on the degree to which it is true or not true of them, using a 4-point Likert scale (1 = Not At All, 4 = Very Much So). Ratings from each item are summed, with a possible range of 20-80. Trait Anxiety (STAIT) is measured similarly, except respondents rate each item on the degree to which it is true or not true of how they "generally feel". In the present study, internal consistency was excellent (Cronbach's alpha: STAIS = 0.921, STAIT = 0.935). Social connectedness. We administered a series of questionnaires to assess individual differences relevant to social connectedness. Again, mean imputation was used to account for any missing data on a given questionnaire (< 1%). Empathy. Scores were extracted from the abridged Empathy Quotient (EQ 83 ) to assess variability in empathy. The EQ is a relatively recent measure of empathy that is unique in that it was explicitly designed to have clinical application 84 . The abridged EQ includes 22 items that respondents rate on the degree to which it is true or not Interpersonal emotion regulation. Scores were extracted from the Interpersonal Regulation Questionnaire (IRQ 85 ) to assess variability in one's recruitment of social resources to up-and down-regulate one's own emotions. This questionnaire includes measures of both tendency to recruit these social resources and efficacy with which these resources are perceived to be helpful, each with respect to managing both positive and negative emotions. The IRQ is a 16-item, factor-analytically derived questionnaire that is comprised of four subscales representing these unique dimensions: (1) negative emotions-tendency, (2) negative emotions-efficacy, (3) positive emotions-tendency, and (4) positive emotions-efficacy. The IRQ has demonstrated excellent convergent and discriminant validity, and is distinct from measures of negative expressivity, intrapersonal emotion regulation ability, and extraversion. Respondents rate each item on the degree to which it is true or not true of them, using a 7-point Likert scale (1 = Strongly Disagree, 7 = Strongly Agree). Ratings from each item are summed to create a composite score across the four factors, with a possible range of 16-112. 
In the present study, internal consistency was excellent (Cronbach's alpha = 0.932). Extraversion. Scores were extracted from the NEO Five-Factor Inventory (NEO-FFI 80 ) to assess extraversion (NEOE). As previously described, the NEO-FFI is one of the most widely used instruments to assess personality. The NEOE scale consists of 12 items that were summed, with a possible range of 0-48. In the present study, internal consistency was excellent (Cronbach's alpha = 0.855). Age and potential controls. A series of demographic characteristics (e.g., education, sex, household income, race, ethnicity, past history of mental or physical illness, medication use) as well as methodological differences across studies (i.e., presentation duration, number of trials, response method-mouse versus keyboard, study location-online versus laboratory) were examined. However, only age consistently correlated with both measures of valence bias and at least one of the predictors (see Tables 1, 2) and, as such, none of the other variables were included as controls. And based on research suggesting that there may be important age-related changes in the nature of a default negative valence bias across the adult lifespan, ultimately, age was included as an interactive effect with negative affect rather than a control (see below). Statistical analyses. Data were analyzed using Mplus 86 . Because of missing data for some surveys across different experiments, some scores were missing. Covariance coverage for the model ranged from 0.137 to 0.979, but a large majority (> 80%) exceeded 0.40. Table 3 reports sample size for each variable and Supplementary Table S2 reports covariance coverage. Further, full information maximum likelihood estimation (FIML 87 ) is considered an optimal approach to addressing missing data patterns of this magnitude. Thus, by implementing FIML, we capitalized on all available data from this large dataset. Additionally, robust standard errors were estimated using the MLR estimator in Mplus to address any violations of univariate or multivariate normality. We applied a latent moderated structural equations method 57 for testing moderation hypotheses. Multiple indices were used to assess global model fit. The comparative fit index (CFI > 0.90), Tucker-Lewis Index (TLI > 0.90), root-mean-square error of approximation (RMSEA < 0.08), and standardized root-mean residual (SRMR < 0.08) are reported. Once a model was determined to adequately fit the data, parameter estimates were interpreted. Three latent variables were modeled. The metric of the latent variables was set by fixing the variance of the latent variables to 1.00,thus, latent scores were standardized. First, scores from the BDI, DERS, NEON, STAIS, and STAIT were modeled as indicators of a latent variable representing negative affect. Second, scores from the www.nature.com/scientificreports/ EQ, IRQ, and NEOE were modeled as indicators of a latent variable representing social connectedness. Third, valence bias for faces (VB-faces and scenes) (VB-scenes) were modeled as indicators of a latent variable representing valence bias. In this model, the latent scores of negative affect and social connectedness were used to predict valence bias. In addition, we explored interactive effects of age and negative affect, predicting that age might play a more critical role in the relationship between negative affect and valence bias given prior work demonstrating that increasing age is associated with a more positive valence bias 43,62 . 
In other words, we expected that the association between negative affect and valence bias would vary as a function of age such that negative affect might have a stronger impact on valence bias in older age when a more positive bias might otherwise be expected. In contrast, we did not have predictions about the effects of social connectedness as a function of age. Thus, we added the interaction between negative affect and age as a predictor of valence bias.
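The significant latent interaction was probed with a regions-of-significance analysis (see Results): the conditional effect of negative affect on valence bias at a given z-scored age is the main-effect coefficient plus the interaction coefficient multiplied by age, with a standard error derived from the coefficient covariance matrix. The sketch below illustrates that probe; the interaction coefficient (0.17, SE 0.06) matches the reported unstandardized estimate, but the remaining coefficient, variances, and covariance are placeholders chosen only to roughly reproduce the reported threshold (around age 51.6), not actual model output.

```python
# Minimal sketch of the regions-of-significance probe: conditional effect of
# negative affect on valence bias at each observed (z-scored) age, with its
# standard error from the coefficient covariance matrix.
import numpy as np

def conditional_effects(b_na, b_int, var_na, var_int, cov_na_int, age_z):
    effect = b_na + b_int * age_z
    se = np.sqrt(var_na + (age_z ** 2) * var_int + 2 * age_z * cov_na_int)
    z = effect / se
    return effect, se, np.abs(z) >= 1.96           # significant at alpha = .05

if __name__ == "__main__":
    ages = np.arange(17, 89)                       # observed age range
    age_z = (ages - 28.06) / 12.07                 # mean (SD) of age in the sample
    # b_int and var_int follow the reported estimate 0.17 (SE 0.06);
    # b_na, var_na, and cov_na_int are illustrative placeholders.
    eff, se, sig = conditional_effects(
        b_na=-0.08, b_int=0.17, var_na=0.0025, var_int=0.0036, cov_na_int=0.0,
        age_z=age_z,
    )
    print("Effect first significant at age:", ages[sig][0] if sig.any() else None)
```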
2021-02-18T06:17:04.371Z
2021-02-16T00:00:00.000
{ "year": 2021, "sha1": "c58c9f1566396bc746812ad929a7d9ae1cceffaa", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-80471-2.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0ab7a3f47bed2a7e7fcab54b26e9cb462b25a193", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
1741756
pes2o/s2orc
v3-fos-license
PARP Inhibition Restores Extrinsic Apoptotic Sensitivity in Glioblastoma Background Resistance to apoptosis is a paramount issue in the treatment of Glioblastoma (GBM). We show that targeting PARP by the small molecule inhibitors, Olaparib (AZD-2281) or PJ34, reduces proliferation and lowers the apoptotic threshold of GBM cells in vitro and in vivo. Methods The sensitizing effects of PARP inhibition on TRAIL-mediated apoptosis and potential toxicity were analyzed using viability assays and flow cytometry in established GBM cell lines, low-passage neurospheres and astrocytes in vitro. Molecular analyses included western blots and gene silencing. In vivo, effects on tumor growth were examined in a murine subcutaneous xenograft model. Results The combination treatment of PARP inhibitors and TRAIL led to an increased cell death with activation of caspases and inhibition of formation of neurospheres when compared to single-agent treatment. Mechanistically, pharmacological PARP inhibition elicited a nuclear stress response with up-regulation of down-stream DNA-stress response proteins, e.g., CCAAT enhancer binding protein (C/EBP) homology protein (CHOP). Furthermore, Olaparib and PJ34 increased protein levels of DR5 in a concentration and time-dependent manner. In turn, siRNA-mediated suppression of DR5 mitigated the effects of TRAIL/PARP inhibitor-mediated apoptosis. In addition, suppression of PARP-1 levels enhanced TRAIL-mediated apoptosis in malignant glioma cells. Treatment of human astrocytes with the combination of TRAIL/PARP inhibitors did not cause toxicity. Finally, the combination treatment of TRAIL and PJ34 significantly reduced tumor growth in vivo when compared to treatment with each agent alone. Conclusions PARP inhibition represents a promising avenue to overcome apoptotic resistance in GBM. Introduction Certain cancers display a highly treatment resistant phenotype. A prototype of these tumors represents Glioblastoma (GBM), which despite vast treatment efforts carries a grim prognosis as reflected by a median overall survival of less than 15 months [1]. One mechanism by which GBM can evade therapy is resistance to apoptotic cell death. Restoring apoptotic sensitivity is therefore of paramount importance to render GBMs sensitive to drug therapy. One way to make treatment resistant cancers amenable to drug treatment is the administration of combinatorial drug regimens. Such treatments may overcome primary and acquired resistance to therapy. Virtually all GBMs develop secondary treatment resistance after administration of either Temozolomide (TMZ), radiation or the combination of TMZ + radiation. Since the DNA repair enzyme poly(ADPribose) polymerase (PARP) is expressed at higher levels in tumor cells when compared to benign tissues and cells [2,3], PARP may therefore represent a tumor specific treatment target. Moreover, while assisting rapid dividing cancer cells with DNA-repair, PARP counteracts apoptotic cell death. Consistent with this idea, interference with PARP by RNA silencing or PARP inhibitors render cancer cells more prone to the cytotoxic effects of DNA-damage inducing treatment modalities, such as radiation, topoisomerase inhibitors or alkylating reagents (i.e. Temozolomide) [4,5]. We focus on the PARP inhibitor, Olaparib (Olap, AZD-2281), which penetrates the blood-brain barrier and has already reached clinical trials in GBM patients. 
Our data demonstrate that Olaparib overcomes apoptotic resistance and sensitizes GBM cells for death receptor-mediated apoptosis induced by TRAIL (tumor necrosis factor-related apoptosis-inducing ligand) through up-regulation of TRAIL receptor 2 (DR5) independent of their TP53 status. In addition, PARP-1-specific siRNA, as well as PJ34 [6], another pharmacological PARP inhibitor, also enhanced extrinsic apoptosis in GBM cells in vitro and in vivo. Since TRAIL is known for its tumor specificity, the combination treatment of PARP inhibitors with TRAIL may be an ideal drug combination therapy with potentially few side effects.

Ethics statement
All procedures were in accordance with Animal Welfare Regulations and approved by the IACUC of Columbia University Medical Center. The study was reviewed and approved by the institutional review board at Columbia University Medical Center. Human tissue samples were anonymized prior to access by the researchers.

Antibodies and Western Blotting

Analysis of cellular viability, apoptosis and cell cycle
3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) colorimetric assays were conducted as previously described [12]. For apoptosis determination, cells were stained and analyzed with Annexin V (FITC)/propidium iodide as previously described [12]. For cell cycle analysis, cells were harvested and fixed in ethanol. After fixation overnight, cells were stained with propidium iodide staining solution (# 4087, Cell Signaling Technology). For analysis of loss of mitochondrial membrane potential, cells were stained with JC-1 and analyzed by flow cytometry as described [11].

Neurosphere formation assay
GS9-6 cells were mechanically dissociated and plated at a density of 500 cells per well in 12-well plates. Twenty-four hours later, cells were treated with TRAIL and Olaparib, singly or in combination. Following 10 days, neurosphere formation was assessed by counting the number of neurospheres that harbor at least 25 cells per sphere, as described in [13,14]. Experiments were performed at least in duplicate and statistical analysis was performed.

Subcutaneous xenografts
U87 cells were grown as subcutaneous tumors in 6-8-week-old SCID SHO mice. To establish the tumors and the respective treatment groups, U87 cells were pretreated with DMSO, TRAIL (100 ng/ml), PJ34 (40 mM) or the combination of both reagents for 2 hours to form 4 different treatment groups. For each treatment condition/group, 3 million viable cells were injected subcutaneously to establish each tumor. After injection, animals were monitored daily for the appearance of tumors. Tumors were measured with a caliper and sizes were calculated according to the standard formula: volume = (length × width²) × 0.5. Once tumors reached a size of more than 1 cm³, animals were euthanized. All procedures were in accordance with Animal Welfare Regulations and approved by the Columbia IACUC.

Statistical analysis
Data were analyzed by two-sided unpaired t-tests, using GraphPad Prism software, or one-way analysis of variance followed by Tukey's Multiple Comparison Test. Values are provided as mean ± SD or mean ± SEM of replicates of a representative experiment out of at least 2 independent determinations. A p value of less than 0.05 (p < 0.05) was accepted as statistically significant.
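The caliper-based volume estimate and the group comparisons described above are straightforward to script. The sketch below applies the volume = (length × width²) × 0.5 formula and a two-sided unpaired t-test with SciPy; the caliper readings are hypothetical, and the authors used GraphPad Prism (and, for the in vivo groups, the Mann-Whitney test reported later), so this is only an equivalent illustration.

import numpy as np
from scipy import stats

def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Caliper-based ellipsoid approximation: (length * width^2) * 0.5, in mm^3."""
    return 0.5 * length_mm * width_mm ** 2

# Hypothetical caliper readings (length, width) in mm for two groups of 6 tumors each
control = [(12.1, 9.8), (11.5, 10.2), (13.0, 9.1), (12.4, 9.9), (11.8, 10.5), (12.9, 9.4)]
combo = [(8.2, 6.1), (7.9, 6.4), (8.8, 5.9), (7.5, 6.0), (8.1, 6.3), (8.4, 5.7)]

vol_control = np.array([tumor_volume(l, w) for l, w in control])
vol_combo = np.array([tumor_volume(l, w) for l, w in combo])

t, p = stats.ttest_ind(vol_control, vol_combo)  # two-sided unpaired t-test
print(f"control: {vol_control.mean():.0f} +/- {vol_control.std(ddof=1):.0f} mm^3")
print(f"combination: {vol_combo.mean():.0f} +/- {vol_combo.std(ddof=1):.0f} mm^3")
print(f"t = {t:.2f}, p = {p:.3g} (significant if p < 0.05)")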
PARP-1 displays a heterogeneous expression pattern in GBM tissue specimens, GBM cell lines and GBM neurosphere cell cultures To determine if PARP-1 is a suitable target for the treatment of malignant glioma we assessed the expression levels in GBM cells and 34 GBM tissue specimens. All GBM tissue specimens demonstrated detectable PARP staining, which had a predominantly nuclear localization with some faint staining in the cytoplasm ( Figure A in S1 Fig.). About 68% of the tumors revealed moderate expression, whereas 32% showed strong expression (S1 Table). The staining intensity was heterogeneous among the different tumors as well as within a specific tumor. Normal brain tissue showed less PARP staining ( Figure A in S1 Fig.). Residing glial cells demonstrated detectable PARP-1 expression. Neurons showed cytoplasmic and nuclear staining, which was mostly confined to the nucleolus. Next, the protein expression levels of PARP-1 were determined being lowest in U87 and higher in neurosphere cultures with the exception of GS9-6, which showed lower protein expression levels of PARP-1 compared to NCH644 and NCH690, respectively (Figure B in S1 Fig.). Inhibition of PARP-1 by Olaparib decreases proliferation of GBM cells We tested whether the PARP-1 inhibitor Olaparib (Figure C in S1 Fig.) is capable of apoptosis induction by itself. LN229 (higher levels of PARP-1) and U87 (lower levels of PARP-1) cells were treated with increasing concentrations of Olaparib. Olaparib elicited a minimal increase in apoptosis in LN229 cells 72 h after treatment ( Figure D in S1 Fig.). However, Olaparib had a significant effect on the cell cycle progression, demonstrating a G2/M arrest in LN229 cells (Figure D in S1 Fig.). In contrast, there was little induction of apoptosis as indicated by a low proportion of cells in the sub-G1 fraction. We also treated LN229 and U87 cells with increasing concentrations of Olaparib, resulting in a dose-dependent inhibition of proliferation which was more accentuated in LN229 cells (Figure E in S1 Fig.), consistent with their higher expression of PARP-1 protein. In addition, U87-EGFRvIII as well as the stem cell-like neurosphere culture, GS9-6, were treated with increasing concentrations of Olaparib and revealed a moderate loss in cellular viability (Figure E in S1 Fig.). The combination of Olaparib and TRAIL cooperates to induce loss of cellular viability in GBM cells and triple-negative breast cancer cells To determine if Olaparib is capable of overcoming apoptotic resistance several established cell lines with different genetic backgrounds were treated with TRAIL, Olaparib or the combination of both drugs. Suboptimal dosages of TRAIL had mild to moderate effects on cellular viability in U87 (88.46%¡0.2928), U373 (53.58%¡0.7463) and LN229 GBM cells (81.33%¡9.783) ( Fig. 1A-C). Olaparib on its own also elicited mild to moderate effects on cellular viability in U87 (61.56%¡1.279), U373 (53.58%¡0.7463) and LN229 (81.33¡9.783) GBM cells ( Fig. 1A-C). However, the combination of both compounds caused a greater reduction of cellular viability in U87 (19.58%¡1.094), U373 (42.29%¡1.493) and LN229 (33.19%¡1.475) GBM cell lines ( Fig. 1A-C). In all three GBM cell lines the combination therapy resulted in a statistically significant (p,0.05) decrease in cellular viability when compared to the single agent treatments. It is noteworthy that the combination treatment does not require the presence of a functional p53 protein since U373 and LN229 harbor a mutated form of p53. 
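The single-agent versus combination viabilities above point to a more-than-additive interaction. The authors report statistical significance against the single-agent arms rather than a formal synergy metric; a Bliss-independence comparison is one common way to express the same point, sketched below with the U87 viability fractions quoted above (TRAIL 88.5%, Olaparib 61.6%, combination 19.6%).

def bliss_expected_viability(v_a: float, v_b: float) -> float:
    """Expected combined viability if two agents act independently (Bliss model)."""
    return v_a * v_b

# U87 viability fractions from the MTT data quoted above
v_trail, v_olaparib, v_combo_observed = 0.885, 0.616, 0.196

v_combo_expected = bliss_expected_viability(v_trail, v_olaparib)  # ~0.545
bliss_excess = v_combo_expected - v_combo_observed                # > 0 indicates more-than-additive killing

print(f"expected combined viability (Bliss): {v_combo_expected:.3f}")
print(f"observed combined viability: {v_combo_observed:.3f}")
print(f"Bliss excess: {bliss_excess:.3f}")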
To show that the favorable effect of the drug combination of TRAIL/Olaparib is not restricted to GBM we treated the triple-negative breast cancer cell line MDA-MB-468. This cell line lacks the expression of estrogen, progesterone and HER2 or the combination of both reagents for 48 hours. Thereafter, MTT assays were performed to determine cellular viability. E-F) GBM neurosphere culture (GS9-6) was treated with suboptimal dosages of TRAIL, Olaparib or the combination of both reagents and assessed for neurosphere formation 2 weeks after plating. G-H): U251 and GS9-6 (GBM neurosphere culture) cells were treated with suboptimal dosages of TRAIL, Olaparib or the combination of both reagents and stained with Annexin-V (FITC-conjugated) and Propidium iodide 24 hours after treatment. Cells were analyzed by flow cytometry to determine receptors and therefore is a model system of another current treatment challenge in oncology. MDA-MB-468 showed a minor response to either TRAIL (96.80%¡0.05955) or Olaparib (92.31%¡4.426), whereas the combination of both reagents (4.79%¡5.393) induced a decrease in cellular viability in a synergistic manner (Fig. 1D). The combination of Olaparib and TRAIL is effective in lowpassage ex vivo GBM cultures Stem cell-like glioma cells are known to be responsible for the rapid recurrence of GBM and for their resistance to therapy. We tested if the combination treatment of TRAIL and Olaparib affects stem cell-like glioma cells and if neurosphere formation is impaired by the combination treatment. While TRAIL and Olaparib used as individual agents exerted minor effects on neurosphere formation, the combination of the two drugs significantly impaired the formation of neurospheres ( Fig. 1E-F), suggesting that this combination treatment may affect the glioma stem cell-like fraction. The combination of Olaparib and TRAIL causes enhanced apoptotic cell death with enhanced activation of initiator and effector-caspases To test the hypothesis that the combination of TRAIL/Olaparib enhances apoptosis, U251 GBM cells were treated with vehicle, TRAIL, Olaparib or the combination of both and stained with Annexin V and Propidium iodide prior to analysis by flow cytometry, which showed enhanced apoptosis in the combination treatment when compared to the single agent treatments ( Fig. 1G and Figure A in S4 Fig.). Next, we determined if stem cell-like glioma cells that are known to be vigorously resistant to extrinsic apoptosis could be sensitized to TRAIL-mediated apoptosis. For this purpose, the primary neurosphere culture, GS9-6, was treated with vehicle, TRAIL, Olaparib and the combination of both for 24 hours. Similarly to the established GBM cells, the combination of Olaparib and TRAIL led to a significant increase in apoptosis induction as compared to the single agent treatments ( Fig. 1H and Figure B in S4 Fig.), suggesting that the combined treatment of Olaparib with recombinant human TRAIL not only affects the bulk of the tumor cells but more importantly targets the stem cell population of GBM cells, which according to the recent literature may be responsible for the rapid recurrence of these tumors. To elucidate the mechanism by which TRAIL/Olaparib elicit their effects on cellular viability, we hypothesized that it might involve enhanced activation of the apoptotic machinery. 
To test this, we conducted Western Blot analysis for activation (cleavage) of caspases in U87, U373, LN229 GBM cells and GS9-6 stem cell-like glioma cells in response to increasing dosages of TRAIL and Olaparib or the combination of both ( Fig. 2A-D). We found that in all cell lines tested the combination treatment of TRAIL and Olaparib led to an enhanced activation of initiator-(caspase-8/9) and effector caspase-3 ( Fig. 2A-D). and U373 (C) GBM cells were treated with TRAIL (ng/ml), Olaparib (10 mM) or the combination of both for 7 hours, subsequently harvested for immunoblotting and analyzed for the expression of cleaved caspase-8 (cCP8), full length caspase-9 (CP9) or effector caspase-3 (cCP3). In B the small fragment of cleaved caspase-8 is exposed longer (separate longer exposure (l. exp.)). Actin serves as a loading control. D) GBM neurosphere culture cells, GS9-6, were incubated with TRAIL, Olaparib or the combination of both reagents for 7 hours, subjected to immunoblotting and analyzed for full length caspase-3 (CP3) and cleaved caspase-3 (cCP3). Actin serves as a loading control. TR -TRAIL, Olap -Olaparib. doi:10.1371/journal.pone.0114583.g002 Olaparib elicits an up-regulation of TRAIL receptor 2 (DR5) accompanied by an increase in CCAAT enhancer binding protein (C/EBP) homology protein (CHOP)/GADD153 expression in GBM cells Under the assumption that Olaparib may cause a cellular stress response we hypothesized that Olaparib may up-regulate TRAIL receptor 2 (DR5), which is a bona-fide example of a protein that is known to be downstream of various stress responses, including endoplasmic reticulum stress and nuclear stress [15]. To confirm this hypothesis, U87 GBM cells were treated with Olaparib and a time course analysis of the expression of DR5 was conducted by Western Blotting. As early as three hours after treatment, an increase in expression of death receptor 5 (DR5) was appreciated, increasing further at 7 hours and culminating at 24 hours (Fig. 3A). Next, we studied the expression level of DR5 after treatment with increasing concentrations of Olaparib at 7 hours (Fig. 3B-D). U87, LN229 and U373 GBM cells revealed the strongest induction of DR5 between 5-10 mM Olaparib ( Fig. 3B-D). In addition, we also confirmed that triple-negative breast cancer cells (MDA-MB-468, MDA-MB-436) revealed an increase in DR5 expression after Olaparib treatment ( Fig. 3E and 3F), suggesting that the mechanism of DR5 up-regulation is not only applicable to GBM but also to other tumor entities. Furthermore, it is known that the stress response transcription factor CCAAT enhancer binding protein (C/EBP) homology protein (CHOP) is often involved in drug-mediated DR5 increase [15]. Therefore, we tested as to whether CHOP is upregulated after increasing concentrations of Olaparib. Olaparib caused a concentration-dependent increase of CHOP in U87, LN229 and U373 GBM cells that paralleled the increase of DR5 levels ( Fig. 3B-D), indicating that CHOP may be involved in DR5 modulation after Olaparib treatment. Highest levels of DR5 coincided with the strongest activation of initiator and effector caspases in the combination treatment of Olaparib and TRAIL A time course analysis of cells treated with the combination of TRAIL/Olaparib was conducted to explore whether the up-regulation of DR5 was associated with an increase in cleavage/activation of initiator/effector caspases. 
To this end, U87 and U373 GBM cells were exposed to the combination of TRAIL/Olaparib and subsequently analyzed for cleavage of caspase-8/-9/-3 and expression of DR5 ( Fig. 3G-H). We provided evidence that the TRAIL-resistant U87 and U373 cells revealed the strongest induction of DR5 along with the most pronounced activation of caspases ( Fig. 3G-H), supporting the hypothesis that DR5 is an instrumental factor for TRAIL/Olaparib-mediated cell death in high-grade gliomas. To further elucidate as to why Olaparib lowers the apoptotic threshold in GBM cells we analyzed the expression of pivotal key molecules that confer resistance to apoptosis. While XIAP did not change significantly, Bcl-2 and Survivin protein expression were mildly affected by Olaparib at 5 mM (Fig. 3I). At 10 mM of Olaparib no significant change was evident. XIAP and DR4 did not change significantly (Fig. 3I). These results reinforce the notion that DR5 is the key modulator of TRAIL/Olaparib-mediated apoptosis in this setting. Olaparib increases membranous DR5 expression in GBM cells Following their processing, death receptors are integrated into the plasma membrane, where they can interact with death ligands. Thus, we aimed to determine whether Olaparib also elevates the expression of DR5 in the plasma membrane (Fig. 3J). We observed that treatment with Olaparib induced an increase in the levels of DR5 (green), compared to DMSO treated cells (blue) and the respective antibody isotype control (red) (Fig. 3J). Olaparib elicits a nuclear stress response with up-regulation of CHOP in a time-dependent manner in GBM cells, and siRNAmediated suppression of CHOP attenuates TRAIL/Olaparibmediated increase of DR5 To determine the mechanistic properties of Olaparib we conducted a time course analysis for the appearance/up-regulation of molecules related to nuclear stress in U373 and LN229 GBM cells (Fig. 4A-B). Olaparib caused a cellular stress response in GBM cells, resulting in an up-regulation of CHOP, ph-Chk1, ph-p53 (Ser15) and ph-H2AX (Ser139) (Fig. 4A-B). Depending on the cell line, evidence for a nuclear stress response was observed as early as three hours after treatment with Olaparib ( Fig. 4A-B). The presence of an early nuclear stress response with an up-regulation of CHOP elicited by Olaparib suggested that Olaparib may modulate the expression of downstream factors related to the cellular stress response, such as DR5 which is upregulated after Olaparib treatment, see Fig. 3. CHOP up-regulation also paralleled the increase of DR5 (Fig. 3B-D). Therefore, we determined if silencing of CHOP may inhibit the Olaparib/TRAIL-mediated increase in DR5 protein levels. We employed a siRNA that specifically suppressed the expression of CHOP. U373 GBM cells were either transfected with a nontargeting siRNA or a CHOP-specific siRNA (Fig. 4C) and subsequently treated with the combination of TRAIL and Olaparib. Cells transfected with CHOPspecific siRNA oligonucleotides showed a decrease in the up-regulation of DR5 after treatment with TRAIL/Olaparib at 3, 7 and 24 hours (Fig. 4D). These results support the hypothesis that CHOP is implicated in the increase in DR5 protein levels mediated by TRAIL/Olaparib. Specific suppression of DR5 by siRNA mitigates TRAIL/Olaparibmediated apoptosis and activation of effector caspase-3 To test the hypothesis that DR5 is in fact a key molecule in TRAIL/Olaparibmediated cell death GBM cell lines were transfected with either non-targeting siRNA or DR5-specific siRNA. 
U373 and U87 cells that were transfected with DR5-specific siRNA revealed suppression of DR5 protein levels when compared to 139) and CHOP. C) U373 GBM cells were transfected with a non-targeting or a CHOP specific siRNA. 72 hours later, cells were harvested, subjected to immunoblotting and analyzed for CHOP expression. D) U373 GBM cells were transfected as indicated with either a non-targeting or a CHOP-specific siRNA. 72 hours later cells were treated with TRAIL/Olaparib and harvested for immunoblotting at the indicated time points. Thereafter, protein expression for DR5 was determined by immunoblotting. E) U373 GBM cells were transfected with a non-targeting or a DR5-specific siRNA. 72 hours after transfection cells were treated with the combination of TRAIL (50 ng/ml) and Olaparib (10 mM) for 7 hours, harvested for immunoblotting and analyzed for the expression of DR5 and cCP3. F) LN229 glioma cells were transfected with a non-targeting siRNA or a DR5-specific siRNA. 72 hours after transfection, cells were harvested for immunoblotting and DR5 expression was determined. G-H) LN229 cells transfected with a non-targeting (n.t.) or a DR5-specific siRNA were treated with TRAIL (200 ng/ml) and Olaparib (10 mM), stained with Annexin V/Propidium iodide and analyzed by flow cytometry. A p-value of less than 0.01 is indicated by two stars ''**''. Columns, mean; bars, SEM. TR -TRAIL, Olap -Olaparib. the non-targeting transfected controls ( Fig. 4E and Figure F in S2 Fig.). In addition, TRAIL/Olaparib-mediated increase in DR5 protein levels was potently attenuated by the DR5-specific siRNA (Fig. 4E). Furthermore, LN229 GBM cells transfected with DR5-specific siRNA were protected from TRAIL/Olaparibmediated cell death, corroborating the importance of DR5 in TRAIL/Olaparibmediated apoptosis (Fig. 4F-H). In addition, we observed that the proapoptotic effect of the combination therapy of TRAIL and PARP inhibitors is dependent on caspase-8 since silencing of this enzyme interferes with cell death induction ( Figure A-E in S2 Fig.). PARP-inhibition by PJ34 as well as specific suppression of PARP-1 overcomes TRAIL resistance in GBM cells To determine whether the sensitizing effect of Olaparib to TRAIL-mediated apoptosis is restricted to this PARP inhibitor, we extended our analysis to another PARP-inhibitor, PJ34, and also analyzed the effects of specific siRNA-mediated suppression of PARP-1 on TRAIL-mediated apoptosis in GBM cells. U87, U87-EGFRvIII and LN229 GBM cells were treated with TRAIL, PJ34 or the combination of both for 72 hours and analyzed for cellular viability (Fig. 5A). While the cooperative antiproliferative effect was most pronounced in U87 wildtype cells, U87-EGFRvIII and LN229 also revealed an enhanced antiproliferative effect when subjected to the combination treatment compared to treatment with each agent alone (Fig. 5A). We also determined as to whether this enhanced cell death by TRAIL and PJ34 is due to an increase in apoptosis. For that purpose, U87 GBM cells were treated with PJ34, TRAIL or the combination of both for 24 hours and subsequently the amount of specific apoptosis was determined by cell cycle analysis (Fig. 5B). We found that treatment with 20 and 40 mM of PJ34 significantly enhanced TRAIL-mediated cell death/apoptosis when compared to the single agent treatments (Fig. 5B,C). 
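The "specific apoptosis" values referred to here are typically sub-G1 fractions corrected for background death in vehicle-treated cells; the authors cite their earlier protocols for the exact procedure, so the correction formula below is the commonly used one rather than a confirmed detail of this paper, and the sub-G1 percentages are made up for illustration.

def specific_apoptosis(treated_pct: float, control_pct: float) -> float:
    """Background-corrected apoptosis (%): 100 * (treated - control) / (100 - control)."""
    return 100.0 * (treated_pct - control_pct) / (100.0 - control_pct)

# Hypothetical sub-G1 fractions (%) from flow cytometry
control_subg1 = 4.0
for label, treated_subg1 in [("TRAIL", 12.0), ("PJ34", 9.0), ("TRAIL + PJ34", 55.0)]:
    print(f"{label:>12}: specific apoptosis = {specific_apoptosis(treated_subg1, control_subg1):.1f}%")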
To exclude that the combination treatment of TRAIL and PJ34 requires wild-type p53, we also treated T98G cells with TRAIL, PJ34 and the combination of both reagents for 24 hours and found that in T98G, TRAIL and PJ34 cooperated to induce apoptosis ( Figure A in S3 Fig.). Consistent with the degree of cell death activation, (cleavage) of initiator caspase-9 was higher in the combination treatments consisting of TRAIL and PJ34 (20 and 40 mM) (Fig. 5D). Since Olaparib elucidated a marked increase in DR5 levels, we determined the protein levels of DR5 after treatment with PJ34 as well as in combination with the death ligand TRAIL (Fig. 5E). For this purpose, LN229 were treated with 20 mM PJ34 (singly) or in the presence of TRAIL (Fig. 5E). Both conditions led to an up-regulation of DR5 (Fig. 5E). However, only the combination treatment with TRAIL and PJ34 increased activation of caspase-9 and effector caspase-3 (Fig. 5E) -consistent with the enhanced cell death in the combination treatment. In concordance with these findings, combined treatment with TRAIL and PJ34 resulted in an enhanced cleavage of PARP in a dosedependent manner in U87 and T98G ( Figure C in S4 Fig.). Next, we sought to demonstrate that the enhancement of TRAIL-mediated apoptosis by both Olaparib and PJ34 was due to inhibition of PARP-1 and not related to an offtarget effect. U87 GBM cells were transfected with a specific siRNA, targeting PARP-1 (Fig. 5F). 72 hours after transfection knock-down of PARP-1 was confirmed by Western Blotting (Fig. 5F). Transfected U87 GBM cells were then treated with increasing concentrations of TRAIL and subjected to analysis for specific apoptosis by flow cytometry (Fig. 5G-H). U87 cells with silenced expression of PARP-1 were significantly more sensitive to the cytotoxic effects of TRAIL, which was accompanied by enhanced activation of effector caspase-3 ( Fig. 5F-H). The combination of PARP inhibitors with TRAIL is dependent on the proapoptotic protein BAX U87 cells were treated with TRAIL, PJ34 or the combination of both for 24 hours. Subsequently, cells were harvested, stained with JC-1 to determine the loss of mitochondrial membrane potential after the individual treatments ( Figure A-B in S5 Fig.). While PJ34 revealed minor changes in mitochondrial membrane potential, considerable changes were observed in cells treated with TRAIL alone. However, the combination treatment lead to an almost complete dissipation of mitochondrial membrane potential, further confirming the cooperative effects of TRAIL and PARP inhibitors with respect to cell death induction ( Figure A-B in S5 Fig.). As the JC-1 stain as well as the activation of caspase-9 suggested a potential involvement of the intrinsic apoptotic pathway, we silenced the expression of BAX, a proapoptotic member of the Bcl-2 family of proteins that is critically involved in the release of cytochrome-c into the cytosol upon intrinsic apoptotic stimulation. U87 cells were transfected with a non-targeting or BAX specific siRNA. 48 hours after transfection cells were treated with the combination of TRAIL and PJ34 for additional 7 hours. Subsequently, cells were harvested and analyzed for the expression of caspase-9, cleaved caspase-3 and Bax ( Figure C in S5 Fig.). 
In the presence of silenced BAX expression, cleavage (activation) of caspases induced by the combination therapy of TRAIL and PJ34 was attenuated, supporting the notion that this treatment regimen requires a mitochondrial amplification loop to maximize its cell death inducing properties. Moreover, U87 cells with silenced Bax expression were treated with the combination of TRAIL and PJ34 for 24 hours and revealed less apoptotic cells ( Figure D-E in S5 Fig.). the combination of both with the indicated concentrations for 7 hours and subjected to immunoblotting analysis for cleavage of caspase-9. E) LN229 GBM cells were treated with PJ34 (20 mM), TRAIL (200 ng/ml) or the combination of both and analyzed for the expression of DR5, caspase-9 (CP9) and cleaved caspase-3 (cCP3). The vertical line on the immunoblot indicates that the first and second samples were noncontiguous, but run on the same gel simultaneously with the other samples. F) U87 gliobastoma cells were transfected with non-targeting or PARP-1-specific siRNA for 72 hours and subsequently incubated with TRAIL (100 ng/ml). Protein expression of cCP3 and PARP-1 was evaluated by immunoblotting. G-H) U87 cells were transfected with either non-targeting siRNA or with a siRNA specific for PARP-1. 72 hours after transfection cells were incubated with increasing concentrations of TRAIL (concentrations in ng/ml) and subsequently analyzed for apoptosis by flow cytometry (specific apoptosis, sub-G1 fraction). Shown are both representative plots (G) as well as a quantitation of the indicated results (H). Columns, mean; bars, SEM. doi:10.1371/journal.pone.0114583.g005 TRAIL and PJ34 cooperate to reduce glioma growth in vivo and reveal minimal cytotoxicity in non-neoplastic astrocytes To verify that the combination treatment of PJ34 and TRAIL mainly affects tumor cells this treatment regimen was also tested in non-neoplastic cells. Remarkably, the combination treatment with TRAIL and PJ34 was shown to be non-toxic to normal human astrocytes and primary glial/neuronal cells, suggesting that this treatment will not only be effective against treatment resistant cancers, but also is expected to exert minimal side-effects (Fig. 6A-B). Next, we evaluated whether TRAIL/PJ34 is capable of reducing tumor growth in vivo. For that purpose, four different treatment groups (6 tumors in each group) were formed that received treatment with either vehicle, TRAIL, PJ34 or the combination of both as described in the methods section. While the control or single treatments with TRAIL or PJ34 demonstrated a significant growth pattern and increase in tumor size, the tumors in the combination treatment group were significantly smaller ( Fig. 6C-E). Discussion and Conclusion Limited therapeutic options are currently available for GBM and high-grade gliomas, which is due to several factors, such as the presence of the blood-brain barrier, the heterogeneity of these tumors [16] and their recalcitrant resistance to apoptosis [17]. Therefore, the quest for novel treatment strategies is on the rise [18,19]. In the present work, we have provided evidence that glioblastoma cells and tissue widely express PARP-1 and that interfering with PARP-1 might be a suitable strategy to overcome apoptotic resistance. In our experiments, PARP inhibition resulted in an inhibition of proliferation of GBM cells accompanied by a small increase in apoptotic cell death on its own. 
TRAIL is a cytokine that has been shown to potently induce apoptosis in neoplastic cells, while leaving most normal cells unaffected. In the last decade, TRAIL has also been studied in clinical trials with limited success thus far [20]. The reasons are multiple, but may include primary or secondary resistance or unfavorable pharmacokinetics [21]. Because TRAIL is a recombinant protein its pharmacokinetics are suboptimal and the molecule is prone to proteolytic degradation. Very recently, a novel molecule was discovered that mediates endogenous induction of TRAIL in tumor cells and thereby triggers their apoptotic suicide [22]. This compound was named TIC10 (TRAIL inducing compound 10) [22]. Having very favorable pharmacokinetics, this molecule crosses the blood brain barrier, induces apoptosis in malignant glioma cells/ cultures and cooperates with Bevacizumab to inhibit tumor growth in an orthotopic model of GBM [22]. Mechanistically, TIC10 induced TRAIL at the level of transcription and furthermore even facilitated the expression of DR5 [23] under certain circumstances, sensitizing the cells further to death receptormediated apoptosis by TRAIL. Another approach to activate extrinsic apoptosis is by utilizing death receptor-binding antibodies with intrinsic activity. Two representative examples out of this group are mapatumumab and lexatumumab, targeting either DR4 or DR5. These molecules have also reached clinical trials [24][25][26][27]. Despite the fact that the majority of GBM cells display resistance towards TRAIL, combination treatments were shown to dramatically enhance the killing efficacy of TRAIL. Given the strong up-regulation of DR5 by Olaparib and PJ34 (present study) it was tempting to speculate whether pharmacological or specific siRNA-mediated PARP-1 inhibition could enhance TRAIL-mediated apoptosis in GBM cells. Confirming this notion, pharmacological PARP inhibition sensitized both established GBM cells and low-passage ex vivo cultures to the cytotoxic effects of TRAIL in vitro as well as in a subcutaneous xenograft model of malignant glioma. In addition, siRNA-mediated specific knock-down of PARP-1 overcame TRAIL resistance in malignant glioma cells accompanied by enhanced activation of caspases. These results are consistent with previous studies showing other DNA-damaging compounds, e.g. Temozolomide, increase expression levels of DR5 and lower the threshold for TRAIL [28]. Specifically, Temozolomide and TRAIL have been combined in a preclinical orthotopic model of GBM and revealed synergistic killing effects in vivo. In this study, TRAIL was administered through convection-enhanced delivery (CED), an intratumoral treatment approach being pursued in the context of malignant gliomas. The clear advantages of CED are less systemic side-effects and that higher drug concentrations are achieved within the tumor. Regarding other TRAIL receptors, it appears that DR5 is the main agonistic death receptor in GBM, which is supported by the fact that most gliomas rely on DR5 signaling since a significant proportion of glioma specimens harbor a methylated DR4 promoter and in turn display low to absent mRNA and protein levels [29]. Along these lines, TRAIL-sensitizing reagents that increase the expression of death receptors appear to almost exclusively affect DR5 levels, while leaving DR4 almost unaffected in GBM cells [30]. This fact is also consistent with our present findings in which we do not find a significant change of DR4 levels in response to Olaparib treatment. 
We also found that PARPinhibitor mediated up-regulation of DR5 appears to be at least partially dependent on the stress response transcription factor, CCAAT enhancer binding protein (C/ EBP) homology protein (CHOP). However, we cannot exclude that potentially other factors are contributing to DR5 up-regulation, such as Erk [31], Sp1 [31], or ATF3 [32], which all three have been described to modulate DR5 levels in response to certain compounds. In addition, we also found that at longer time Fig. 6. The combination of TRAIL and PJ34 is non-toxic to human astrocytes and primary rat neurons and glial cells and exerts stronger antiproliferative activity against malignant glioma than the respective single treatments in vivo. A) Representative microphotographs of human astrocytes and primary glial/neurons cells treated with PJ34 for 72 hours. B) Human astrocytes and primary neurons/glial cells were treated with increasing concentrations of PJ34 or in combination with human TRAIL (huTRAIL) or murine TRAIL (muTRAIL) for 72 hours and then analyzed by MTT assay. C) Shown is the tumor growth curve with the four different treatment groups: Control (DMSO), TRAIL, PJ34, TRAIL/PJ34 (each n56 tumors). Tumors and treatment groups were established as described in the material and methods section. D) Quantification and statistical analysis of treatment groups after 23 and 26 days, respectively. The Mann-Whitney test was used for statistical analysis and a p-value of less than 0.05 was deemed statistically significant. E) Gross images of representative tumors from different treatment groups. T -TRAIL, P -PJ34. doi:10.1371/journal.pone.0114583.g006 points and higher concentrations of the PARP-inhibitor CHOP levels declined, which may be due to a feedback mechanism. Concerning other tumor types it was recently demonstrated that specific PARP-1 knock-down as well as treatment with the PARP inhibitor, PJ34, sensitized pancreatic cancer cells to TRAIL-mediated apoptosis in vitro and in vivo [33]. These results are generally in agreement with the results presented here, but in contrast Yuan et al. suggested a mechanism of sensitization to TRAIL by PARPinhibitors that did not involve an increase in TRAIL receptors. Despite death receptors, TRAIL resistance is determined by a number of other intracellular molecules. For instance, a certain proportion of gliomas reveal low expression of caspase-8 [9,34] which would be expected to attenuate death receptor-mediated apoptosis. In case of the drug combination of TRAIL with either Olaparib or PJ34 caspase-8 is an instrumental molecule for cell death since down-regulation of caspase-8 strongly suppresses apoptosis induced by the combination therapy. Thus, it appears that for the drug combination of TRAIL with PARP inhibitors the presence of a functional caspase-8 is a prerequisite. Furthermore, c-FLIP is an endogenous inhibitor of caspase-8 and has been associated with TRAIL-resistance in cancers. C-FLIP consists of at least three splicing variants, a more recently described c-FLIP (R) form, a long form (c-FLIP (L) and a short form (c-FLIP (S)) [35]. Each of which have roles in regulating extrinsic apoptosis. Depending on the tumor type and drug combination treatment either one of the forms appears to be more important in apoptosis inhibition. 
Pharmacological modulation of c-FLIP levels were achieved by several compounds including histone-deacetylase inhibitors, mitochondrial Hsp90 inhibitors [12], proteasomal inhibitors [36], flavonoids [37] and chemotherapeutics [38] among others. With respect to histone-deacetylase inhibitors, it was recently found that these drugs modulate c-FLIP levels at the level of transcription through c-myc, which suppresses c-FLIP transcription [39]. In the present study, PARP inhibitors did not modulate the caspase-8 levels. Other factors that modulate TRAIL-resistance are the Inhibitor of Apoptosis Proteins (IAPs) [40,41], XIAP and survivin and the anti-apoptotic Bcl-2 family of proteins, such as Bcl-2, Bcl-xL and Mcl-1. In this context, down-regulation of survivin by flavonoids has been shown to enhance TRAIL-mediated apoptosis in GBM cells. Similarly, inhibition of Bcl-xL and Bcl-2 by ABT-737 is known to drive death receptor-mediated apoptosis in malignant glioma and other tumor entities [42]. Elucidating additional off-target effects, ABT-737 was also shown to increase death receptor expression in cancer cells [43], thereby further facilitating its sensitizing effects for death ligands even at the level of the death-inducingsignaling-complex (DISC-complex). In summary, we have provided a framework for a novel treatment regimen for malignant glioma that mechanistically relies on the reactivation of extrinsic apoptotic cell signaling by induction of death receptor expression. This treatment is active in vivo and also effective against stem cell-like glioma cells, a specific cellular fraction of tumor cells that drive recurrence and treatment resistance. Supporting Information S1 Fig. Expression levels of PARP-1 in GBM cells and GBM tissue microarrays (TMAs). A) GBM tissue microarrays (TMAs), containing 34 tumor samples, were stained with an antibody against PARP-1. Representative micro photographs were taken from two GBMs and one representative sample of adjacent normal brain tissue. B) Cell lysates were prepared from three established GBM cell lines, U87, LN229, and U373 cells as well as from three neurosphere glioma cell cultures, NCH644, GS9-6 and NCH690. PARP-1 protein expression was analyzed by immunoblotting. One star ''*'' indicates short term exposure, whereas two stars ''**'' show a longer exposure for the same immunoblot of PARP-1. C) Chemical structure of the PARP inhibitor, Olaparib. D) LN229 GBM cells were treated with Olaparib (10 mM) for 72 hours and subjected to cell cycle analysis by flow cytometry. sG1 -sub G1 fraction (apoptotic cell fraction). E) U87, U87-EGFRvIII, LN229 GBM cells and GS9-6 GBM neurosphere culture were treated with increasing concentrations of the PARP inhibitor, Olaparib, and after 72 hours subjected to analysis of cellular viability by MTT assay. Values are provided as mean ¡ SEM of replicates of a representative experiment. doi:10.1371/journal.pone.0114583.s001 (TIF) S2 Fig. Inhibition of components of the DISC-complex interferes with engagement of apoptosis induced by TRAIL/PARP inhibitors. Requirements of TRAIL/Olaparib mediated cell death. A) U87 GBM cells were transfected with a non-targeting siRNA or a caspase-8-specific siRNA. 72 hours after transfection cells were treated with the combination of TRAIL (100 ng/ml) and Olaparib (10 mM) for 7 hours, harvested for immunoblotting and analyzed for expression of full length caspase-8 (FL-CP8) and cleaved caspase-3 (cCP3). B) U87 cells were transfected as in (A). 
Subsequently cells were treated with the combination of TRAIL (100 ng/ml) and Olaparib (10 mM) for 24 hours, harvested and analyzed for the amount of apoptotic cells (sub-G1 fraction) by flow cytometry. C) LN229 GBM cells were transfected with a non-targeting siRNA or a caspase-8-specific siRNA. 72 hours after transfection cells were treated with the combination of TRAIL (200 ng/ml) and Olaparib (10 mM) for 7 hours, harvested for immunoblotting and analyzed for expression of full length caspase-8 (FL-CP8) and cleaved caspase-3 (cCP3). D) LN229 cells were transfected as in (C). Subsequently cells were treated with TRAIL (200 ng/ml) and Olaparib (10 mM) for 24 hours, harvested and analyzed for the amount of apoptotic cells (sub-G1 fraction) by flow cytometry. E) U87 cells were transfected with a non-targeting or a caspase-8specific siRNA and subsequently treated with the combination of TRAIL and PJ34. Cells were analyzed for specific apoptosis and representative plots are provided. F) U87 cells were transfected with a DR5-specific siRNA for 48 hours, treated with the combination of TRAIL/Olaparib for 7 hours and analyzed for the expression of DR5 and cleavage of caspase-3 by immunoblotting. TR -TRAIL, Olap -Olaparib. doi:10.1371/journal.pone.0114583.s002 (TIF) -FL2-H). A) Representative histograms after staining for JC-1. B) Quantitative representation of the results for the JC-1 staining. C) U87 GBM cells were transfected with a non-targeting or a BAXspecific siRNA. 48 hours after transfection cells were subjected to treatment with TRAIL (100 ng/ml) and PJ34 (20 mM) for 7 hours and harvested for immunoblotting to determine protein levels of caspase-9 (CP9), cleaved caspase-3 (cCP3) and BAX. D-E) In addition, BAX siRNA transfected cells were treated as above for 24 hours. Following the incubation, cells were harvested for analysis for apoptosis by flow cytometry. Representative plots and a quantitation of the results are provided in D and E, respectively. Columns, mean; bars, SEM. TR -TRAIL. doi:10.1371/journal.pone.0114583.s005 (TIF) S1
2017-07-09T08:13:44.005Z
2014-12-22T00:00:00.000
{ "year": 2014, "sha1": "89eb74bd8f91d46d9b1c3d45dbd7f71eba9a64d8", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0114583&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "89eb74bd8f91d46d9b1c3d45dbd7f71eba9a64d8", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
19019345
pes2o/s2orc
v3-fos-license
Probing strongly hybrid nuclear-electronic states in a model quantum ferromagnet

We present direct local-probe evidence for strongly hybridized nuclear-electronic spin states of an Ising ferromagnet LiHoF$_4$ in a transverse magnetic field. The nuclear-electronic states are addressed via a magnetic resonance in the GHz frequency range using coplanar resonators and a vector network analyzer. The magnetic resonance spectrum is successfully traced over the entire field-temperature phase diagram, which is remarkably well reproduced by mean-field calculations. Our method can be directly applied to a broad class of materials containing rare-earth ions for probing the substantially mixed nature of the nuclear and electronic moments.

The compound LiHoF 4 is widely regarded as a prototypical system realizing the transverse-field Ising model [1]. The groundstate in zero field is ferromagnetically ordered, while applying a relatively small transverse field induces a zero-temperature quantum phase transition at H c = 4.95 T into a quantum paramagnet [2], as shown in Fig. 1. Meanwhile, the hyperfine coupling strength of a Ho 3+ ion is exceptionally large, with a coupling constant A = 39(1) mK [3,4]. The resulting strong hybridization between the electronic and nuclear magnetic moments [5] leads to two dramatic effects close to the quantum critical point: (i) significant modification of the low-temperature magnetic phase boundary (see Fig. 1) [2]; (ii) incomplete mode softening of the low-energy electronic excitations at the critical point [6]. Therefore, this system provides a rare opportunity to explore the quantum phase transition of a magnet coupled to a nuclear spin bath [2,[6][7][8]. The impact of strong hybridization has also been highlighted for magnetic-ion diluted insulators, such as LiYF 4 :Ho 3+ , using magnetic resonance [9,10]. A similar line of effort has more recently achieved single-molecule magnetic resonance with a rare-earth ion [11]. Furthermore, strong hybridization is of great interest in quantum information science [12][13][14]. As much as these examples focus on the single-ion limit, the other limiting case of many-body systems, such as LiHoF 4 , provides a very different and complementary perspective. While in the long-range-ordered state the hybridization is suppressed, an applied transverse field introduces quantum fluctuations enhancing the hybridization towards H c . However, probing the strongly hybridized states in LiHoF 4 directly using spectroscopic methods, at the lowest energy scale, has so far been restricted to the thermal paramagnetic phase in the single-ion limit. The involved energy scale is too low to be resolved by neutron scattering [6,7]. Magnetic resonance on 165 Ho nuclei would provide a direct way of probing the hybridized nuclear-electronic states. However, the resonance in the ordered phase is expected around the frequency of 4.5 GHz in zero field, which does not fall within the operating frequencies of either conventional nuclear magnetic resonance (NMR) or electron spin resonance (ESR) instrumentation.

[Fig. 1 caption, partially recovered: ... [2]. Solid line represents a mean-field calculation following Ref. [7] taking into account the strong hyperfine interaction, while the dashed line is calculated without the hyperfine interaction. Inset shows schematic energy levels for the Ising spins in the ordered phase (left) and their modification by hyperfine interactions with the nuclear spins (right).]

Some
studies have reported a hyperfine structure in ESR [3,15], but all in the paramagnetic regime above the ordering temperature T c = 1.53 K [2]. To date, microscopic evidence for the realization of the unique nuclear-electronic Ising model [16,17] is absent. Here we demonstrate experimentally nuclear-electronic magnetic resonance in LiHoF 4 using coplanar microwave resonators and a vector network analyzer (VNA). We successfully trace the temperature and field evolution of the spectrum over the entire phase diagram, and show that it is remarkably well reproduced by a mean-field calculation with parameters set by independent spectroscopic measurements [3,4,7,8]. We begin with a description of our experimental setup shown in Fig. 2. A series of microwave coplanar resonators with different fundamental frequencies from 1.7 to 5.6 GHz were prepared. The impedance of the res- onator is matched to the rest of the system by optimizing the gap size between the conductors. The oscillating magnetic field, B(t), generated at the sample position is parallel to the surface. A cube shaped sample of 2 × 2 × 2 mm 3 was placed at the center of the active strip, with a sub-millimeter gap in-between to avoid unwanted heating. The measurement geometry was chosen such that the applied magnetic field, B 0 , is along the crystallographic b axis of the tetragonal Scheelite structure, and B(t) is perpendicular to both B 0 and the c axis to satisfy the magnetic resonance condition. We measured the S 11 parameter, which is defined as the ratio of the reflected to the input power, using a VNA which is connected through a low-loss cryogenic coaxial cable to the coplanar resonator. The coaxial cable was thermally anchored at each stage of the dilution refrigerator including the 1 K pot, Still, and mixing chamber to ensure thermalisation. The sample thermometer was located only 5 mm away from the sample which was thermally anchored to the mixing chamber. With an input power of -16 dBm applied by the network analyzer, the sample base temperature was 0.15 K to within 0.01 K. To guide and interpret our experimental investigation, we perform a model calculation using a mean-field approximation. The full Hamiltonian has been well characterized through a number of different experiments [3,7] and is given by, whereĴ i (J = 8) andÎ i (I = 7/2) are the electronic and nuclear angular momentum operators at site i, the dipolar coupling constant J D = n(g L µ B ) 2 = 13.5 mK, D αβ is the dimensionless coupling parameter for the dipole-dipole interaction [18], and the negligible exchange constant J ex = −1.2 mK. The nuclear Zeeman and quadrupole interactions are assumed to be negligible [9]. The crystal field interaction H CF with the surrounding ions splits the electronic states resulting in a groundstate which is a non-Kramers doublet with a strong Ising-like anisotropy and the first excited state 11 K above. In the ordered state, dipolar coupling lifts the groundstate degeneracy resulting in pseudo-spins up and down which we label as |↑ and |↓ states. Each state is further split into 8 nuclear-electronic states by the hyperfine interaction ( Fig. 1(a), inset). The total Hamiltonian can be diagonalized in the basis of (2J + 1) × (2I + 1) = 136 nuclear-electronic |α = |m J , m I states. The evolution of the lowest states with the applied transverse field is shown in Fig. 3(a). The energy level difference between consecutive states, ∆E, changes dramatically with the field as illustrated in Fig. 3(b). 
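The Hamiltonian referred to above is described term by term, but its explicit equation does not appear in the text. Written from the quantities defined there (crystal field H_CF, hyperfine constant A, Landé factor g_L, dipolar couplings J_D D_αβ, and exchange J_ex), a standard form is the following sketch; prefactors and sign conventions may differ from the authors' exact expression.

\mathcal{H} = \sum_i \left[ H_{\mathrm{CF}}(\hat{\mathbf{J}}_i) + A\,\hat{\mathbf{I}}_i \cdot \hat{\mathbf{J}}_i - g_L \mu_B\, \mathbf{B}_0 \cdot \hat{\mathbf{J}}_i \right]
  - \frac{1}{2} J_D \sum_{i \neq j} \sum_{\alpha\beta} D_{\alpha\beta}(ij)\, \hat{J}_{i\alpha} \hat{J}_{j\beta}
  - \frac{1}{2} J_{\mathrm{ex}} \sum_{\langle i,j \rangle} \hat{\mathbf{J}}_i \cdot \hat{\mathbf{J}}_j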
In the first approximation ∆E is proportional to A| J |, where | J | is the magnitude of the total angular momentum, hence ∆E decreases with the field and reaches a minimum at H c . The diagram shown in Fig. 3(b) allows us to predict at which field the magnetic resonance occurs for a given frequency. Experimentally we observed magnetic transitions between the adjacent nuclear-electronic levels through resonant absorption of continuous microwaves by the sample on a coplanar resonator. Figure 3(c) presents a typical frequency-field map at 0.3 K of the S 11 parameter using a resonator with the unloaded frequency of 3.4 GHz. The map shows a clear anomaly around 3.6 T indicative of magnetic resonance. This field value indeed agrees with the one predicted by mean-field calculations, which can be seen in Fig. 3(b) by taking an intersect of blue dashed line for 3.4 GHz with the solids lines for the energy level difference. For an in-depth comparison between experiments and calculations, we proceed to directly calculate the imaginary part of the frequency-dependent susceptibility χ ′′ (f ) which is responsible for magnetic resonance absorption [19,20]. The calculations were performed within the linear-response framework [18] using the mean-field wavefunctions |α and |α ′ , χ where E α is the energy of the hybridized nuclearelectronic eigenstates in the presence of the mean-field, n α = exp(−βE α )/Z is the thermal population factor and Z = α ′ exp(−βE α ′ ) is the partition function. The subscript y refers to the oscillating field direction. The lifetime in the linear-response calculation of the states is assumed to be independent of field and temperature, and was fixed to 40 ns, corresponding to a damping of Γ α ′ α = 0.17 GHz, which provided the best match to our data. The lifetime broadening may result from direct or indirect contributions from the electronic dipolar and exchange or nuclear dipolar couplings [20], which we leave for future study. We note that the contribution to susceptibility from electronic moments,Ĵ y , is 500 times larger than the contribution from nuclear momentsÎ y . Therefore, despite the predominantly nuclear-spin nature of the |↑ levels, the response we measure comes mainly from the electrons. This gives a tremendous enhancement of the signal from the nuclear states amplified by electronic moments. Figure 3(d) presents the calculated frequencyfield map of χ ′′ intensity at 0.3 K, which shows a drastic change upon approaching H c from below. Resonant absorption is expected from our calculations to be in the 2 to 4.5 GHz bandwidth. The absorptive part of the susceptibility is experimentally estimated as χ ′′ ∝ ∆(1/Q) [19], where the quality factor Q is defined as the loaded frequency divided by the full-width-half-maximum in the absorption profile in frequency as shown in the inset of Fig. 3(c). In Fig. 3(e) we show the experimental magnetic resonance spectra at 0.3 K for several different frequencies by plotting ∆(1/Q) = 1/Q−b, where b is a uniform background, which can be compared to the calculated spectra at 0.3 K in Fig. 3(f). Both calculations and measurements at frequencies of 3.4 and 3.9 GHz show resonant peaks around 3.6 and 3.0 T, respectively. Conversely, no resonance features are visible for the frequency of 1.7 GHz in both calculations and experiments. The predicted transitions between second-nearest neighbouring levels at 5.6 GHz is too weak to be observed experimentally. 
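The explicit expression for the absorptive susceptibility is likewise not reproduced above. Built from the quantities defined there (eigenstates |α⟩ with energies E_α, thermal populations n_α, damping Γ_α'α, and the oscillating-field direction y), a standard damped linear-response form reads as follows; the overall prefactor and the analogous nuclear term in Î_y (which the text states is roughly 500 times smaller) are omitted, so this is a sketch rather than the authors' exact formula.

\chi''_{yy}(f) \propto \sum_{\alpha, \alpha'} \left( n_\alpha - n_{\alpha'} \right)
  \left| \langle \alpha' | \hat{J}_y | \alpha \rangle \right|^2
  \frac{\Gamma_{\alpha'\alpha}}{\left( E_{\alpha'} - E_\alpha - h f \right)^2 + \Gamma_{\alpha'\alpha}^2}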
The calculated spectrum for 4.5 GHz appears as a broad hump at fields below 2 T, which can be expected from Fig. 3(d), where the frequency line cuts along the strongest χ ′′ intensity. For a better comparison the A value was slightly reduced by 3%, which is nearly within the uncertainty of the reported one [3]. In principle, the uncertainty in the crystal field parameters can influence our calculations [8]. Nevertheless, the excellent agreement with the experiments is remarkable considering that the model is essentially parameter-free. Some minor discrepancies, such as the fine structure in the 4.5 GHz experimental spectrum, are likely due to the fixed lifetime of all levels in our model. However, since the modes around 4.5 GHz lie very close in the relevant field range, that structure would depend critically on tiny variations of the parameters. We therefore consider it more prudent to use a constant damping. The high-field tails in the 3.4 and 3.9 GHz spectra are possibly due to the neglected effects of fluctuations.

Furthermore, we investigate the temperature evolution of the spectrum for 3.4 GHz from 0.15 to 2.5 K as shown in Fig. 4(a). At base temperature a resonance peak appears around 3.7 T, which on warming decreases in amplitude and shifts to lower fields. The former is due to redistribution of the thermal population of states at higher temperatures. The latter reflects the decreasing size of the ordered electronic moment with increasing temperature, sensed by the nuclei through the hyperfine interactions. In Fig. 4(b) we track the resonance field as a function of temperature. Our measurements are shown to be very sensitive to small variations of the hyperfine coupling, as depicted by the bands. As shown in Figs. 3 and 4, all the salient features of the experimental results are well reproduced by the model calculations, thereby validating the transverse-field nuclear-electronic Ising model [16,17]. The excellent description of the experimental results by our model implies that the probed states have a strongly hybridized character of both nuclear and electronic degrees of freedom. While this had only been hinted at by previous bulk measurements [2] and neutron spectroscopy [6], here we show directly the transitions between the strongly hybridized nuclear-electronic states. Likewise, the presented magnetic resonance should be distinguished from conventional NMR and ESR, where the electronic and nuclear moments are approximated as product states [19][20][21].

To highlight the qualitative difference in the hybridized states between those in the many-body system and in the single-ion limit, we calculate the groundstate entanglement entropy [22,23] between the electronic and nuclear moments as a measure of the hybridization. We employ the Schmidt decomposition of the mean-field wavefunction, |Ψ⟩ = Σ_n c_n |m_J⟩ ⊗ |m_I⟩, where c_n ≥ 0 and Σ_n c_n² = 1, and the entanglement entropy is given by the von Neumann entropy S = −Σ_n |c_n|² ln |c_n|². The calculated entropy in the absence of dipolar interactions decreases smoothly with a transverse field (Fig. 5), in agreement with that reported in Ref. [17]. However, by turning on dipolar coupling the model produces a cusp-like peak at H c ; that is, the hybridization in the ordered state of LiHoF 4 increases with the applied field until it reaches a peak at the critical point. The field essentially mixes the higher excited states into the groundstate, thereby enhancing the hybridization.
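The von Neumann entropy above is a one-line computation once the Schmidt coefficients of the mean-field groundstate are known. A minimal numpy sketch follows; the coefficient vector is made up for illustration and stands in for the actual 136-state mean-field decomposition.

import numpy as np

def entanglement_entropy(c: np.ndarray) -> float:
    """Von Neumann entropy S = -sum_n c_n^2 ln c_n^2 for Schmidt coefficients c_n >= 0."""
    p = np.asarray(c, dtype=float) ** 2
    p = p / p.sum()        # enforce the normalization sum_n c_n^2 = 1
    p = p[p > 0]           # drop zero weights (0 * log 0 -> 0)
    return float(-(p * np.log(p)).sum())

# Illustrative Schmidt coefficients (not from the actual calculation)
c = np.array([0.95, 0.25, 0.15, 0.08])
print(f"S = {entanglement_entropy(c):.3f}")  # 0 for a product state, ln(N) for maximal mixing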
Increasingly larger field, H > H c , magnetizes the electronic and nuclear moments along the field direction such that the groundstate approaches a product state. To summarize, we have demonstrated Ho nuclearelectronic magnetic resonance of LiHoF 4 in a transverse magnetic field over the entire field-temperature phase diagram. The spectral evolution is remarkably well reproduced by mean-field calculations, validating the transverse-field nuclear-electronic Ising model. Taking advantage of the well-characterized model nature of LiHoF 4 , we have successfully probed the strongly hybridized states and their evolution in the long-rangeordered state. Our experimental scheme will find direct applications not only in the LiRF 4 (R=rare earth) family [8,24,25], but also other R containing compounds including spin glass [16,17,[26][27][28] and spin ice [29,30]. We are grateful to M. Graf, S. S. Kim and P. Jorba Cabre for their contribution in building experimental setup at initial stage, B. Dalla Piazza for sharing his insight into the mean-field and linear-response theory. We also thank J. Jensen, A. Feofanov and D. Yoon for helpful discussions. M.J. is grateful to support by European Commission through Marie Sk lodowska-Curie Action COFUND (EPFL Fellows). This work was supported by the Swiss National Science Foundation, the MPBH network and European Research Council grant CONQUEST. I.K., P.B., and M.J. contributed equally to this work.
2016-12-30T15:02:37.000Z
2016-07-01T00:00:00.000
{ "year": 2016, "sha1": "f0090df91de54185deab1029ed9387348f5bef54", "oa_license": null, "oa_url": "https://infoscience.epfl.ch/record/225883/files/Kovacevic%20PhysRevB.94.214433.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f0090df91de54185deab1029ed9387348f5bef54", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
270644941
pes2o/s2orc
v3-fos-license
Open-Cage Copper Complexes Modulate Coordination and Charge Transfer This study presents a novel copper-based redox shuttle that employs the PY5 pentadentate polypyridyl ligand in a dye-sensitized solar cell (DSSC). The [Cu(PY5)]2+ complex exhibits a unique five-coordinate square pyramidal geometry, characterized by a strategically labile axial position, to facilitate efficient dye regeneration while minimizing electron recombination, thereby enhancing DSSC performance. Notably, the inclusion of 4-tert-butylpyridine (TBP) as an additive is shown to significantly modulate the electrochemical and photophysical properties of the copper complexes, attributed to its coordination to the vacant axial site. This interaction leads to an improved open-circuit voltage and overall device efficiency, with the complexes achieving promising efficiencies under standard solar irradiance. The findings underscore the potential of utilizing copper-based redox shuttles with designed ligand geometries to overcome the limitations of current DSSC materials, opening new avenues for the design and optimization of solar energy conversion devices. This work not only contributes to the fundamental understanding of the behavior of copper complexes in DSSCs but also paves the way for future research aimed at exploiting the full potential of such geometrical and electronic configurations for the development of more robust and efficient solar energy solutions. ■ INTRODUCTION −5 The large overpotential required for efficient dye regeneration was a major limitation.Recent progress with outer-sphere redox shuttles has offered a solution to reduce the overpotential penalty and improve the DSSC performance.For example, copper-based redox shuttles have enabled 15.2% efficiency under standard solar irradiance and an astonishing 34.5% efficiency under indoor fluorescent lighting at 1000 lx intensity to be achieved. 6,7−15 One reason for the slow recombination is coordination of exogenous Lewis base additives to the electrolyte, including 4-tert-butylpyridine (TBP), to the Cu(II) species. 16,17−21 TBP has been shown to coordinate to an open position of the Cu(II) species and sometimes substitute polydentate ligands completely. 16,17Lewis bases have long been employed as electrolyte additives in DSSCs to increase the performance of the devices by shifting in the titania conduction band edge to a more negative potential and blocking recombination by adsorbing to the titania surface, 22,23 but have a more significant impact on electrolyte and overall device performance with copper redox shuttles. Recent developments in copper-based redox shuttle design by Sun and colleagues include use of pentadentate ligands to inhibit coordination and ligand substitution by the exogeneous bases. 24For example, the pentadentate Cu(II) complex, [Cu(tpe)] 2+/+ (tpe = N-benzyl-N,N′,N′-tris(pyridin-2ylmethyl)ethylenediamine), was shown to have resistance to substitution, even when exposed to TBP.This resistance to ligand substitution is attributed to two factors.First, the increased denticity of the ligand translates to a large stability constant of the metal complex owing to the chelating effect.Second, they specifically designed the coordination sphere's steric constraints to shield the copper complexes�particularly their oxidized forms, from TBP coordination. 
In a similar vein, we recently reported a Cu complex featuring a hexadentate ligand, bpyPY4 (6,6′-bis(1,1-di-(pyridine-2-yl)ethyl)-2,2′-bipyridine), to improve the stability of Cu(II) complex via the chelate effect. 25We found that the bpyPY4 ligand provided a dynamic coordination environment, where a 5-coordinate Cu(II) complex is formed and the noncoordinated pyridyl moiety blocks TBP coordination.In this work, we build upon the family of five-coordinate Cu(II) complexes as redox shuttles by utilization of the pentadentate ligand, 2,6-bis[1,1-bis(2-pyridyl)ethyl]pyridine (PY5), with copper metal centers.A unique feature of this geometrically constrained ligand is that it should leave an open coordination site for an exogenous base but form a stable complex via the chelation effect.Synthesis, characterization, and analysis of the behavior of these interesting copper complexes in DSSC are presented below. ■ RESULTS AND DISCUSSION The synthesis of the PY5 ligand was previously reported and the method reproduced here. 26The copper complexes were synthesized by reacting equimolar ratios of the PY5 ligand with copper precursors leading to the formation of [Cu(PY5)]OTf, where OTf is trifluoromethanesulfonate, and [Cu(PY5)]OTf 2 as described in the Experimental Methodssection.The complexes were purified via recrystallization from acetonitrile (ACN) for Cu(I) complexes and dichloromethane (DCM) for Cu(II) complexes and characterized by 1 H NMR spectroscopy and elemental analysis.Single crystals were also isolated, and X-ray diffraction revealed the solid-state structures of all complexes, as shown in Figure 1.Select bond lengths and bond angles of the structures depicted in Figure 1 are provided in Tables 1 and 2. The [Cu(PY5)]OTf complex is four-coordinate, with one of the pyridine arms in the PY5 ligand turned away from the Cu(I) center, but the steric ligands prevent formation of the 27 where a τ4 of 1 represents an ideal tetrahedral geometry and 0 represents an ideal square planar geometry. The [Cu(PY5)]OTf complex exhibited a calculated τ4 value of 0.65 which is indicative of a seesaw structure. 27The constrained nature of the ligand leads to a significant portion of the Cu(I) center being solvent-exposed.The choice of solvent during synthesis thus plays an important role in controlling disproportionation.When the complex was synthesized in DCM, Cu(I) disproportionated to form copper metal and a Cu(II) complex.Interestingly when an NMR was taken of the disproportionated Cu(II) product, it did not match the synthetic [Cu(PY5)] 2+ spectrum.Therefore, we employed ACN as the solvent of choice because the Cu(I) complex did not undergo disproportionation, likely due to the stabilization of the open site by the coordinating solvent.The solid-state structure of [Cu(PY5)]OTf 2 shows a pseudo-octahedral geometry, with the PY5 ligand coordinated at five sites and ACN, the solvent used for synthesis, coordinated at the sixth site.The average bond length for the PY5 ligand is 2.081 Å, and the bond length between the copper and nitrogen on the ACN is 2.369 Å.In an attempt to see how the system changes when there was a vacant axial site, the crystals were regrown in the presence of a noncoordinating solvent, DCM, which revealed an interaction between one of the oxygens on the OTf counterion and the copper center in the axial position with an apparent bond length of 2.453 Å. 
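For readers checking the geometry assignment, the four-coordinate index quoted above is usually computed from the two largest ligand-metal-ligand angles α and β (the Houser τ4 parameter); the angles themselves are listed in Tables 1 and 2 rather than reproduced here, so the rearrangement below simply shows what a value of 0.65 implies.

\[
\tau_4=\frac{360^{\circ}-(\alpha+\beta)}{141^{\circ}},\qquad
\tau_4=0.65\;\Rightarrow\;\alpha+\beta=360^{\circ}-0.65\times141^{\circ}\approx268^{\circ},
\]

that is, between the 219° expected for an ideal tetrahedron and the 360° of an ideal square plane, consistent with the seesaw description.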
Challenges arose in purifying ACN-bound Cu(II) complexes due to the apparent lability of ACN, leading to a mixture of copper complexes with and without ACN-bound.These observations all suggest a weakly bound, labile coordination to the open coordination site on Cu(II) complexes. To assess the structural nuances of the copper complex, 1 H NMR spectroscopy was employed.The 1 H NMR spectra for the [Cu(PY5)]OTf complex, shown in Figure S1, were recorded in deuterated ACN at room temperature.The spectrum displays an integration for the 25 protons.Four peaks representing the pyridine "arms" are observed: one at approximately 8.5 ppm and three others between 7.35 and 7.95 ppm, with each integrating to four protons.Peaks at around 8.00 ppm, integrated to one proton, and 7.25 ppm, integrated to two protons, correspond to the central pyridine.The peak at approximately 2.20 ppm, integrating to six protons, is attributed to the ligand's methyl groups.The Cu(II) complex is a paramagnetic compound making it hard to determine accurate quantitative information from the 1 H NMR spectra shown in Figure S1.By investigating the [Cu(PY5)]-OTf 2 complex upon the addition of TBP, shown in Figure S5, it can be seen that as TBP is added to the system, there is no indication of any unbound PY5 in solution; however, there is a change in the shift of the peaks corresponding to the copper complex.This indicates the PY5 ligand is not displaced, but a reaction occurs.Furthermore, broad peaks grow into the spectra at the expected values for the TBP ligand.The broadness of the TBP peaks could be due to a rapid exchange on the NMR time scale as the TBP complexes are binding and releasing from the copper center.The interaction that is being seen is most likely due to TBP displacing the labile ACN or OTf to form [Cu(PY5)(TBP)] 2+ . The optical spectra of the copper complexes in ACN display peaks below 450 nm, which were attributed to π−π* absorptions from pyridine units.Metal-to-ligand charge transfer bands are observed between 450 and 500 nm for [Cu(PY5)]OTf, as shown in Figure S6.Two absorption peaks were also observed for the [Cu(PY5)]OTf 2 complex, assigned to d−d transitions, at 597 nm (16,750 cm −1 ), with an extinction coefficient of 79.8 M −1 cm −1 , and 909 nm (11,001 cm −1 ), with an extinction coefficient of 13.2 M −1 cm −1 , as shown in Figure S7.While four transitions are expected for a d 9 complex with C 2v symmetry, our observation of two d−d transitions is consistent with a tetragonally distorted octahedral or square pyramidal geometry with two, presumably higher energy, transitions not resolved here.The assignment is consistent with the crystal structure, where a loosely bound axial ACN or OTf is observed in the solid state and likely unbound and solvated in solution.We note that Stack and coworkers reported a strikingly similar single d−d transition for [Cu(PY5)(Cl)] + at 623 nm with an extinction coefficient of 80 M −1 cm −1 . 28In this case, Cl occupies an equatorial position with an open axial site, with one of the pyridine arms in the PY5 ligand turned away from the Cu(II) center, to form a fivecoordinate Cu(II) complex with a square pyramidal geometry.Anderson and co-worker recently reported another structurally similar Cu(II) complex with the pentadentate 2,6-(bis(bis-2-Nmethylimidazolyl)phosphino)pyridine ligand, whose absorption spectrum is also similar to that observed here, with two apparent d−d transitions at approximately 600 and 900 nm (maxima and extinction coefficients not reported). 
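As a quick check on the band positions quoted above, and on the TBP-induced shifts discussed in the next paragraph, wavelengths convert to wavenumbers as ν̃ = 10^7/λ with λ in nm; the 574 nm end point used below is simply an assumed 597 nm band minus the quoted 23 nm blue shift.

\[
\tilde{\nu}=\frac{10^{7}}{\lambda/\mathrm{nm}}\ \mathrm{cm^{-1}}:\qquad
\frac{10^{7}}{597}\approx 16\,750\ \mathrm{cm^{-1}},\qquad
\frac{10^{7}}{909}\approx 11\,001\ \mathrm{cm^{-1}},
\]
\[
\Delta\tilde{\nu}=10^{7}\left(\frac{1}{574}-\frac{1}{597}\right)\approx 671\ \mathrm{cm^{-1}}.
\]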
29The very similar d−d transitions observed for the structurally similar but different equatorial ligand environments, is surprising. When TBP is added to the [Cu(PY5)]OTf 2 solution, a noticeable blue shift of 23 nm (671 cm −1 ) for the peak at ∼600 nm, which also comes with an almost 50% increase in absorbance, and 58 nm (749 cm −1 ) for the peak at ∼900 nm, with minimal change in absorbance, are shown in Figure 2.This change in the spectra confirms that there is a reaction occurring between the [Cu(PY5)]OTf 2 and TBP.Equilibrium is reached with 10 equiv of TBP relative to the Cu(II) in solution, evidenced by the absorption peaks not changing with additional aliquots of TBP.We hypothesize the reaction is TBP coordinating to the open sixth coordination site on the Cu(II) center, or displacing the weakly bound ACN/OTf.We have previously reported the displacement of bidentate ligands from Cu(II) centers by TBP to form [Cu(TBP) 4 ] 2+ . 16The spectrum of [CuPY5] 2+ titrated with TBP does not match the spectrum of [Cu(TBP) 4 ] 2+ , shown in Figure 2 for comparison, indicating that this is not the product of titration and the PY5 ligand is still bound to the Cu(II) center, consistent with NMR spectroscopy results above.The difference spectra between the parent [Cu(PY5)]OTf 2 complex and the titrated solutions show isosbestic points at 655, 735, and 880 nm indicating the spectra are composed of two absorbing species, as shown in Figure S8.The apparent blue shift of the spectra upon titration of TBP is consistent with the formation of a new complex assigned as [Cu(PY5)TBP] 2+ .Cyclic voltammetry measurements were performed to determine the electrochemical potential of the parent complex and assess the effect of TPB on the redox behavior, as shown in Figures S8 and S9.In the [Cu(PY5)]OTf 2 complex, two redox waves were observed: one at −0.372 V vs Fc + /Fc and another smaller wave at −0.662 V vs Fc + /Fc.When the complex was measured in the absence of ACN, using DCM as the solvent, the wave at −0.662 V vs Fc + /Fc was not observed but appeared when ACN was titrated into the solution.This indicates that the wave at −0.662 V vs Fc + /Fc corresponds to the ACN-bound complex, and the peak at −0.372 V vs Fc + /Fc is either the OTf bound complex or a 5-coordinate [Cu(PY5)] 2+ complex.In order to test these possibilities, the copper complex was synthesized with bistrifilmide (TFSI) as the counterion, which is noncoordinating, and measured in anhydrous ACN.A single wave was observed at −0.382 V vs Fc + /Fc, which shows this wave cannot be due to bound OTf, and we thus assign it to the 5-coordinate [Cu(PY5)] 2+ complex.To further test the possibility that OTf is bound to the copper center in solution, a variable-temperature 19 F NMR spectroscopy experiment was conducted, which involved cooling the solution from room temperature to −40 °C.Only a single peak is observed, whereas two peaks are expected if one OTf is bound and one is in the outer coordination sphere.Upon lowering the temperature, the fluorine peak corresponding to the triflate counterion exhibits a decrease in intensity coupled with an increase in sharpness; see Figures S10−S11.This observation suggests that the counterion does not undergo exchange with the axial site on the copper center, and the counterion does not interact with the copper center in solution.Such an exchange would typically result in the broadening of the peak as the temperature decreases.Notably, this trend remained consistent irrespective of whether ACN or DCM was used as 
the solvent.When this evidence is compiled with the previous results of the ultraviolet−visible (UV−vis) and cyclic voltammetry (CV), it becomes clear that the solution geometry of the [Cu(PY5)] 2+ complex is square pyramidal. Upon addition of TBP to the solutions containing [Cu(PY5)]OTf 2 or [Cu(PY5)]TFSI 2 , the redox waves dissipate and a new wave grows at −0.512 V vs Fc + /Fc.This new wave continues to grow as TBP is added until ca. 10 equiv relative to the amount of Cu(II) in the solution is reached, where it is the only redox wave observed and is constant, as shown in Figure 3.This is the same end point that can be determined from the absorption spectra, demonstrating that they both derive from forming the same complex in solution, which has gone to completion and is assigned to coordinated TBP to the open site.Attempts to isolate the TPB complex were unsuccessful, however.The anodic wave is very broad, with a poorly defined peak.The detailed reason for this unusual waveform is still not clear but likely due to the coordination of TBP coupled with oxidation of the Cu(I) species.Thus, 10 equiv of TBP were added to the electrolyte in all cells investigated in this paper, where there should be negligible mixtures of coordination complexes, as described below. The solution potentials were determined with open-circuit potential measurements by using a Pt wire.The solution potential is similar between the OTf and TFSI versions of the complex, −0.372 and −0.398 V vs Fc + /Fc, respectively, which are both slightly negative of the predicted Nernstian potentials of −0.357 and −0.365 V vs Fc + /Fc, respectively.This indicates that the predominant redox shuttle that is affecting the devices is the one represented by the wave at ca. −0.375 V vs Fc + /Fc, with minimal contribution from the wave at −0.662 V vs Fc + / Fc.When TBP is added to the devices, the solution potential shifts negatively by ca.130 mV to −0.486 and −0.501 V vs Fc + /Fc for the OTf and TFSI complexes, respectively, which matches well with the predicted −0.503 V vs Fc + /Fc.While the change of counterion has some effect on the redox properties of the complex, the performance of the DSSC devices was unaffected by the counterion chosen, so the [Cu(PY5)]OTf 2 was used for further studies. The cross-exchange electron transfer rates between [Cu-(PY5)]OTf 2 and octamethylferrocene (Me 8 Fc) were measured via stopped-flow spectroscopy using methods previously reported and described in the Supporting Information. 25,30he self-exchange rate constant for Me 8 Fc +/0 was previously determined to be 2.0 (±0.4) × 10 7 M −1 s −1 from NMR line broadening measurements. 
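The self-exchange analysis in the next paragraph rests on the Marcus cross relation, which in the form usually applied to such stopped-flow data reads as below; the measured cross-exchange rate constant k12 and the equilibrium constant K12 (obtained from the difference in formal potentials) are not quoted in this excerpt, so the expression is given symbolically.

\[
k_{12}=\left(k_{11}\,k_{22}\,K_{12}\,f_{12}\right)^{1/2},\qquad
\ln f_{12}=\frac{\left(\ln K_{12}\right)^{2}}{4\,\ln\!\left(k_{11}k_{22}/Z^{2}\right)},
\]

where k11 is the unknown [Cu(PY5)]2+/+ self-exchange rate constant, k22 = 2.0 × 10^7 M−1 s−1 is the Me8Fc+/0 self-exchange rate constant quoted above, and Z is a collision frequency; solving for k11 is what yields the value reported in the next sentence.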
Thus, the self-exchange rate constant for the [Cu(PY5)]OTf 1/2 couple could be determined from the Marcus cross-exchange formalism 30,31 and was found to be 88.1 (±7.3) M−1 s−1. This relatively slow self-exchange rate constant is attributed to the large inner-sphere reorganization energy of approximately 0.74 eV due to the change in geometry and coordination number upon electron transfer. This self-exchange rate constant is about an order of magnitude faster, with an ∼0.25 eV lower inner-sphere reorganization energy, compared to other related cobalt and copper cage complexes that undergo a change in coordination number upon electron transfer, which we attribute to the more strained geometry of the ligand preventing larger structural changes in the backbone. 25,30 Thus, [Cu(PY5)] + should be a better dye regenerator and result in high current densities.
[Figure caption: CVs recorded in anhydrous ACN with 0.1 M TBAPF6 using a glassy carbon working electrode (blue). CVs with the addition of 1 equiv of TBP (pale green), 5 equiv of TBP (green), 10 equiv of TBP (deep green), and 15 equiv of TBP (dark green) are also shown.]
The behavior of the [Cu(PY5)] 2+/+ complexes in DSSCs and the effects of TBP were therefore investigated by fabricating devices with various concentrations of TBP. Scheme 1 depicts the energy level diagram of the DSSCs, including the TiO2 conduction band, 32 Y123 dye, 33 and the Cu(PY5) electrolyte. Figure 4a shows the current density vs applied voltage (J−V) curves for the best devices measured for each condition. The compiled results of all devices are provided in the Supporting Information. The performance of all devices improved with the addition of TBP; however, the effect is relatively small and primarily due to increases in open-circuit photovoltage (V OC). The steady-state short-circuit photocurrent density, J SC, is around 9 mA cm−2 for all devices. The incident photon-to-current efficiency (IPCE) spectra were also measured, as shown in Figure 4b. Integration of the IPCE yields a predicted J SC under white light. The predicted current is in general agreement with the J SC determined from the J−V curves; however, the IPCE and integrated J indicate a trend of increasing photon conversion with increasing TBP concentrations, which is not observed under white light in the J−V results. Thus, the intrinsic kinetics giving rise to photocurrent generation improve with TBP; however, another process limits the photocurrent under white-light conditions. These increases in IPCE are attributed to a decreased level of recombination to the Cu(II) form of the redox shuttle as TBP is added to the solution, which improves the charge collection efficiency. This is a well-known effect of TBP, which results from steric blocking of recombination by surface-adsorbed TBP. 22 TBP also increases electrolyte viscosity, which will result in mass-transport limitations of the photocurrent; this explains the essentially constant photocurrent under white light, i.e., it is limited by mass transport, not intrinsic kinetics. Interestingly, the V OC improves by ca.
100 mV with the addition of TBP despite a ca.130 mV negative shift in the solution potential.Since the V OC is the difference in solution potential and Fermi level (E F ) in the TiO 2 at open circuit, this result implies that the E F increases by ca.230 mV with TBP.TBP is known to raise the conduction band energy and block recombination, both of which can result in a higher E F .The partitioning of these effects is challenging; however, decreased recombination with TBP is consistent with the increased IPCE which we take as the dominant effect.We note that increases in the conduction band increase the driving force, and thus the rate, of recombination, and is thus likely a minor or negligible contribution to increased V OC . ■ CONCLUSIONS Our research introduces a novel copper-based redox shuttle for dye-sensitized solar cells, utilizing a pentadentate polypyridyl ligand with a labile axial position to form [Cu(PY5)] 2+/+ complexes with a unique five-coordinate square pyramidal geometry.This design not only stabilizes the complex but also enables precise interactions with additives like TBP, modulating the device's electrochemical properties without displacing the PY5 ligand.These strategic interactions enhance DSSC efficiencies, may increase compatibility and optimization with new sensitizers being developed, 34 and offer insights into optimizing redox shuttle design for improved solar energy conversion, setting a foundation for future advancements in robust and efficient solar technologies. The starting material, 1,1-bis(2-pyridyl)ethane, and the PY5 ligand were synthesized according to published procedures. 35Cu(PY5)](OTf).A mixture of PY5 (74.5 mg, 0.168 mmol) and [Cu(ACN) 4 ](OTf) (57.5 mg, 0.153 mmol) in anhydrous ACN was stirred for 30 min at room temperature.The solution was precipitated with anhydrous diethyl ether, forming a yellow solid, and the solid was collected.The solid was dried under vacuum.(97.8 [Cu(PY5)](TFSI).Copper bistrifilmide was made in situ by combining silver bistriflimide (47.0 mg, 0.121 mmol) and copper chloride (12.0 mg, 0.121 mmol) in minimal anhydrous ACN.The solution was stirred for 30 min at room temperature.After mixing, the solid silver chloride was removed via filtration, and then the copper bistrifilmide solution was added to PY5 (52.5 mg, 0.118 mmol) and stirred overnight.The solution was precipitated with anhydrous diethyl ether, forming a yellow solid, and the solid was collected.The solid was dried under vacuum (90.2 mg, 96.9% yield). 
1 Characterization.All NMR spectra were recorded on an Agilent DirectDrive2 500 MHz spectrometer at room temperature and referenced to residual solvent signals.All NMR spectra were evaluated by using the MestReNova software package features.Cyclic voltammograms were obtained using μAutolabIII potentiostat using BASi glassy carbon electrode, a platinum mesh counter electrode, and a fabricated 0.01 M AgNO 3 , 0.1 M TBAPF 6 in ACN Ag/AgNO 3 reference electrode.All measurements were internally referenced to an Fc + /Fc couple via the addition of ferrocene to solution after measurements or run in a parallel solution of the same solvent/ electrolyte.UV−vis spectra were taken using a PerkinElmer Lambda 35 UV−vis spectrometer using a 1 cm path length quartz cuvette at 480 nm/min.Elemental analysis data were obtained via Midwest Microlab.For single-crystal X-ray diffraction, single crystals were mounted on a nylon loop with paratone oil using a Bruker APEX-II CCD diffractometer.Crystals were maintained at T 1/4 173(2) K during data collection.Using Olex2, the structures were solved with the ShelXS structure solution program using the direct methods solution method.Photoelectrochemical measurements were performed with a potentiostat (Autolab PGSTAT 128N) in combination with a xenon arc lamp.An AM 1.5 solar filter was used to simulate sunlight at 100 mW cm −2 , and the light intensity was calibrated with a certified reference cell system (Oriel Reference Solar Cell & Meter).A black mask with an open area of 0.07 cm −2 was applied on top of the cell active area.A monochromator (Horiba Jobin Yvon MicroHR) attached to the 450 W xenon arc light source was used for monochromatic light for IPCE measurements.The photon flux of the light incident on the samples was measured with a laser power meter (Nova II Ophir).IPCE measurements were made at 20 nm intervals between 400 and 700 nm at short-circuit current. Device Fabrication.TEC 15 FTO was cut into 1.5 cm by 2 cm pieces which were sonicated in soapy DI water for 15 min, followed by manual scrubbing of the FTO with Kimwipes.The FTO pieces were then sonicated in DI water for 10 min, rinsed with acetone, and sonicated in isopropanol for 10 min.The FTO pieces were dried in room air and then immersed in an aqueous 40 mM solution TiCl 4 solution for 60 min at 70 °C.The water used for the TiCl 4 treatment was preheated to 70 °C prior to adding 2 M TiCl 4 to the water.The 40 mM solution was immediately poured onto the samples and placed in a 70 °C oven for the 60 min deposition.The FTO pieces were immediately rinsed with 18 MΩ water followed by isopropanol and were annealed by heating from room temperature to 500 °C, holding at 500 °C for 30 min.A 0.36 cm 2 area was doctor-bladed with commercial 30 nm TiO 2 nanoparticle paste (DSL 30NRD).The transparent films were left to rest for 10 min and were then placed in a 125 °C oven for 30 min.The samples were annealed in an oven that was ramped to 325 °C for 5 min, 375 °C for 5 min, 450 °C for 5 min, and 500 °C for 15 min.The 30 nm nanoparticle film thickness was 8.2 μm.After cooling to room temperature, a second TiCl 4 treatment was performed as described above.When the anodes had cooled to 80 °C, they were soaked in a dye solution of 0.1 mM Y123 in 1:1 ACN/tertbutyl alcohol for 18 h.After the anodes were soaked, they were rinsed with ACN and dried gently under a stream of nitrogen. 
The PEDOT counter electrodes were prepared by electropolymerization in a solution of 0.01 M EDOT and 0.1 M LiClO 4 in 0.1 M SDS in 18 MΩ water.A constant current of 8.3 mA for 250 s was applied to a 54 cm 2 piece of TEC 8 FTO with predrilled holes using an equal-sized piece of FTO as the counter electrode.The PEDOT electrodes were then washed with DI water and ACN before being dried under a gentle stream of nitrogen and cut into 1.5 × 1.0 cm 2 pieces.The working and counter electrodes were sandwiched together with 25 μm Surlyn films by placing them on a 140 °C hot plate and applying pressure.The cells were then filled in a nitrogenfilled glovebox with electrolyte through one of the two predrilled holes and were sealed with 25 μm Surlyn backed by a glass coverslip and applied heat to seal with a soldering iron.The electrolyte consisted of 0.10 M Cu(I), 0.05 M Cu(II), 0.1 M Li(Counterion), and 0.5 M 4-tert-butylpyridine in ACN.Contact to the TiO 2 electrode was made by soldering a thin layer of indium wire onto the FTO. * sı Supporting Information The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acs.inorgchem.4c01046.NMR spectra of ligand and complexes reported herein, results of electrochemical and spectroscopic measurements, self-exchange measurements, and device performance metrics (PDF)
2024-06-22T15:47:56.606Z
2024-06-19T00:00:00.000
{ "year": 2024, "sha1": "624824519cd0b910bc636c26ad4c526847ea508a", "oa_license": "CCBY", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acs.inorgchem.4c01046", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "86c9e2f0d0b389f9a5c62a90545f684952111dd4", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
233235238
pes2o/s2orc
v3-fos-license
Cell-type-specific profiling of loaded miRNAs from Caenorhabditis elegans reveals spatial and temporal flexibility in Argonaute loading Multicellularity has coincided with the evolution of microRNAs (miRNAs), small regulatory RNAs that are integrated into cellular differentiation and homeostatic gene-regulatory networks. However, the regulatory mechanisms underpinning miRNA activity have remained largely obscured because of the precise, and thus difficult to access, cellular contexts under which they operate. To resolve these, we have generated a genome-wide map of active miRNAs in Caenorhabditis elegans by revealing cell-type-specific patterns of miRNAs loaded into Argonaute (AGO) silencing complexes. Epitope-labelled AGO proteins were selectively expressed and immunoprecipitated from three distinct tissue types and associated miRNAs sequenced. In addition to providing information on biological function, we define adaptable miRNA:AGO interactions with single-cell-type and AGO-specific resolution. We demonstrate spatial and temporal dynamicism, flexibility of miRNA loading, and suggest miRNA regulatory mechanisms via AGO selectivity in different tissues and during ageing. Additionally, we resolve widespread changes in AGO-regulated gene expression by analysing translatomes specifically in neurons. T he development of complex organisms and their adaptation to the surrounding environment requires the implementation of precise gene expression networks. These networks must be tightly controlled and tuned both during time (temporally) and in a cell-type specific manner (spatially). While the core gene expression network is set by RNA polymerase II (Pol II)-driven transcription and dictated by complex transcription factor cohorts, the modulation and fine-tuning of these networks is often achieved post-transcriptionally. MicroRNAs (miRNAs) are highly conserved modulators of post-transcriptional gene expression. These 21-24 nt class of small RNAs are encoded within longer non-coding RNAs, that form extended foldback structures known as pri-miRNAs and are transcribed by Pol II. The foldback structure is processed by Drosha and DGCR8 (Pasha in nematodes) into a "pre-miRNA", which is subsequently exported to the cytoplasm 1,2 . Here, Dicer proteins process the pre-miRNA into a mature miRNA duplex [3][4][5] . Once processed, mature miRNAs are loaded into Argonaute (AGO) proteins, which together constitutes the core of the RNAinduced silencing complex (RISC), whereupon they bind with imperfect base-pair complementary to their target mRNA to elicit their regulatory role. This pairing typically occurs in the 3′ untranslated region of protein coding mRNAs with nucleotides 2-7 at the 5′ end (known as the seed region) of the singlestranded guide miRNA, allowing the direct repression of the target mRNA. This silencing effect is elicited as translational repression, which is often coupled with transcript decay or via direct endonucleolytic cleavage (slicing) catalyzed by AGO itself 6,7 . The sequence specificity of any RNA silencing reaction is conferred by the guide miRNA, but owing to the flexibility in targets granted by the six-nucleotide seed region, single miRNAs can potentially target hundreds of mRNAs 8 . However, molecular evidence of the bulk of these interactions is still largely lacking, with the discovery of miRNAs far outpacing their assignment to targets and cellular functions. 
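To make the seed-pairing rule above concrete, the short sketch below scans a 3′UTR for perfect matches to a miRNA seed (guide nucleotides 2-7). The let-7 guide sequence is the canonical C. elegans one, but the UTR fragment is an invented placeholder rather than a validated target site.

```python
def seed_matches(mirna, utr_dna):
    """Return 0-based positions in a DNA 3'UTR that match the reverse
    complement of the miRNA seed (guide nucleotides 2-7, a 6-mer site)."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:7]                                    # nucleotides 2-7 of the guide
    site = "".join(comp[b] for b in reversed(seed))      # reverse complement (RNA)
    site = site.replace("U", "T")                        # compare against DNA
    return [i for i in range(len(utr_dna) - 5) if utr_dna[i:i + 6] == site]

let7 = "UGAGGUAGUAGGUUGUAUAGUU"                # C. elegans let-7 guide strand
utr = "AAGCTACCTCAGGTTTACCTCATT"               # made-up UTR fragment, illustration only
print(seed_matches(let7, utr))                 # -> [4, 15]
```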
MiRNA-mediated post-transcriptional regulation generates a more complex topology of gene expression from that produced solely from nuclear events, enabling developmental complexity, flexibility, and robustness. MiRNAs are involved at all levels of development from the early stages of embryogenesis to the final distinguishing molecular events that precise terminal differentiation [9][10][11] . This is exemplified most strikingly in C. elegans, where a single miRNA has been found to direct the terminal fate decisions of otherwise identical pairs of neurons, segregating and defining each neuron with a distinct identity, physiology, and function 9,12 . Aside from specifying multicellularity, miRNAs play an essential role in maintaining cellular homeostasis and can act to rapidly, and often reversibly, adjust regulatory networks in response to or in spite of environmental fluctuations [13][14][15] . In addition to miRNA expression, processing, and stability, their interaction with effector AGO proteins is the most important step in defining their activity and thus their ultimate functions. There are 25 different AGOs in C. elegans, but only three appear to be dedicated exclusively to miRNA pathways. The first two, Argonaute-Like Gene 1 (ALG-1) and ALG-2 are widely expressed and share high sequence similarity (81% at the amino acid level) 16,17 . On the contrary, ALG-5 expression is restricted to germ cells, associating with a subset of germline-enriched miR-NAs and being required for normal reproductive development 18 . In many species, similar scenarios exist whereby multiple AGOs are available for miRNA loading within the same somatic cells. However, it remains unclear whether highly individualized miRNA:AGO silencing complexes form within certain cell types, and whether this customizes miRNA function within a particular cellular context. Although genome-wide views of temporal miRNA expression have been a core facet of metazoan research, genome-wide spatial profiling involving fluorescence activated cell sorting (FACS), microdissection, immunoprecipitation, or novel enzymatic techniques [19][20][21][22][23] fall short of integrating the activities of miRNAs at cell-type resolution. This information would not only aid in our ability to assign biological functions to miRNAs, but also reveal relationships between AGO proteins and miRNAs that may represent new layers of miRNA regulation. Here, we focussed on miRNAs loaded into silencing complexes by immunoprecipitating the two main somatic AGOs known to bind miRNAs in C. elegans, ALG-1, and ALG-2. By expressing tagged versions of these AGOs under cell-type-specific promoters, we generated a genome-wide map of loaded miRNAs across three major tissue types. We identified a large portion of miRNAs with strong associations to either ALG-1 or ALG-2 in the intestine, body wall muscles, or nervous system. Most miRNAs exhibited a highly cell-type-specific loading pattern, with many individual miRNAs also demonstrating AGO-specific preferences within particular cell types. Due to the sensitivity of the technique, we discovered not only multiple novel miRNAs, but also a rich array of miRNA isoforms that exhibited cell-and AGO-specific loading patterns. 
Finally, we demonstrated at the molecular level that AGOs act with both spatial and temporal loading specificity, and that ALG-2, which we found to modulate widespread translatome changes in the nervous system, can act in a surrogate capacity when the function of ALG-1 is reduced or compromised both genetically or physiologically during ageing. Results Establishing a system to spatially profile loaded miRNAs. To gain insight into the spatial function of miRNAs at a genomewide level, we generated a cell-type specific-map of miRNAs bound to their effector AGO proteins. To achieve this goal, we constructed a series of transgenic C. elegans strains in which either ALG-1 or ALG-2 was labeled with an N-terminal hemagglutinin (HA) epitope tag and expressed exclusively in select cell types (Fig. 1a). The HA::alg-1 and HA::alg-2 transgenes were placed under the control of promoters driving expression in three major somatic tissue types, the intestine (ges-1p), the body wall muscle (BWM, myo-3p), and the nervous system (rgef-1p) (Fig. 1a). Mos1-mediated single-copy insertion (mosSCI) 24 was used to target each transgene to a specific integration site on chromosome IV, thereby providing stable and comparable expression. We confirmed correct cell-type-specific expression patterns in live animals by visualizing green fluorescent protein (GFP) signals within nuclei using an SL2::his-58::gfp cassette (splice leader 2, histone tagged with GFP) incorporated into each HA::alg-1/2 construct (Fig. 1b). Immunoblotting revealed that each of the HA ALG-1 and HA ALG-2 tissue-specific fusion proteins was expressed and migrated at the expected size, and that it could be successfully purified from populations of whole animals using immunoprecipitation (Fig. 1c). Differences in ALG protein levels between cell types largely reflected the strength of the tissue-specific promoters used to control their expression. Intratissue differences between HA ALG-1 and HA ALG-2 protein levels were most likely attributable to variations in protein stability between the two AGOs, given that the expression criteria (5′ and 3′ cis-regulatory sequences, transgene copy number and insertion loci) are identical between them. Indeed, AGO stability positively correlates with miRNA loading 25 , suggesting that a greater proportion of cellular miRNAs associated with ALG-1, a notion supported by the sequencing results below. Although this or other forms of protein turnover may be occurring, neither a priori should affect the ability to normalize our data and thereby accurately profile loaded miRNAs. Next, we investigated whether immunoprecipitation of HA ALG-1 and HA ALG-2 from homogenized populations of each of the transgenic strains could reveal stably associated miRNAs in a cell-specific and AGO-specific manner. To this end, we first examined their association with two canonical miRNAs, miR-2 and lin-4, by RNA gel blotting (Fig. 1d). In input samples, we noted no obvious change in abundance of either of these miRNAs between wild-type and transgenic populations, suggesting that the AGO transgenes did not affect native miRNA levels. When compared to immunoprecipitations performed on wild-type animals lacking any HA-labeled AGOs, we detected an enrichment of both miRNAs in a range of context-dependent associations with ALG-1 and ALG-2. Specifically, we found that miR-2 stably associated equally with both ALG-1 and ALG-2 but was predominantly enriched with AGOs purified from the BWM and intestine (Fig. 1d). 
lin-4 was more evenly enriched across tissue types but showed preferential binding for ALG-2 in neurons (Fig. 1d). These results suggest first, that the epitopelabeled AGOs were functional in their ability to bind mature miRNAs, and second, that we were able to resolve cell-type and AGO-type differences in these associations that hinted at the complexity of miRNA activity. The primary principle underlying our approach is its ability to purify miRNAs from specific cell or tissue types for direct comparison. During homogenization, it is conceivable that miRNAs released from one tissue type could non-specifically interact with AGOs from another tissue type and undermine the analysis of cell-type-specific miRNA:AGO complexes. To confirm that only genuine, in vivo assembled miRNA:AGO complexes were immunoprecipitated, we crossed the muscle-specific HA ALG-1 and HA ALG-2 lines (myo-3p:: HA ALG-1 and myo-3p:: HA ALG-2) to a genetic background lacking the miR-1 miRNA [mir-1(n4101)]. miR-1 regulates retrograde signaling at neuromuscular junctions and is expressed exclusively in several muscle lineages, including BWM 26 . Consistent with this notion, we found that miR-1 was strongly enriched in immunoprecipitates of both HA ALG-1 and HA ALG-2 expressed in BWM in wild-type animals, but was undetectable in mir-1(n4101) backgrounds (Fig. 1e). Demonstrating that miR-1:AGO complexes did not assemble post-homogenization and that our approach represents an accurate in vivo capture of the cellular context of miRNA: AGO interaction, we found that supplementation of exogenous miR-1 into mir-1(n4101); HA alg-1/-2 lysates did not result in the immunoprecipitation of miR-1:: HA ALG-1 or miR-1:: HA ALG-2 complexes ( Fig. 1e and Supplementary Fig. 1). A map of cell-type specific, loaded C. elegans miRNAs. Having validated our ability to accurately profile cell-type-specific miRNA:AGO interactions, we next performed small RNA sequencing of the immunoprecipitated samples to generate a genome-wide view of loaded miRNAs. Deep-sequencing yielded at least 10 million reads per library (two biological replicates for each sample), including that of non-transgenic wild-type samples that had undergone an identical immunoprecipitation protocol. The wild-type non-transgenic populations were analysed in order to provide a baseline level of background miRNAs, that were nonspecifically enriched during immunoprecipitation of AGO complexes, to which all other samples could be compared. Overall, we detected 95 miRNAs (over one third of the total known C. elegans miRNAs) that were significantly associated (log 2 fold change > 2, P < 0.05) with either ALG-1 or ALG-2 in the intestine, BWM, or nervous system ( Fig. 2a and Supplementary Data 1). Indicatory of a high level of cell-type-specific miRNA function, the vast majority of miRNAs exclusively loaded into AGOs in a single cell-type (Fig. 2b). Intestinal cells contained the highest number of these miRNAs (33), with neurons and muscle cells containing near identical numbers (23 and 22, respectively). The sharing of miRNAs between two tissues was a more common feature between intestine and muscle (7), and neurons and muscles (5), with only one miRNA in common between the intestine and nervous system (Fig. 2b). Overall we found that more miRNAs loaded into ALG-1 than ALG-2, although both ALGs displayed high levels of cell-type-specific miRNA loading, albeit in different proportions (Fig. 2c, d). 
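A minimal sketch of the kind of enrichment filter described above (reads-per-million normalisation of IP libraries against the non-transgenic mock-IP background, followed by a log2 fold-change cut-off) is given below. The miRNA names, counts, library sizes, and pseudocount are invented, and no formal statistical test is included, so this illustrates only the thresholding step rather than the authors' actual pipeline.

```python
import math

def log2_enrichment(ip_counts, mock_counts, ip_total, mock_total, pseudo=1.0):
    """Reads-per-million normalisation followed by log2(IP/mock) per miRNA."""
    out = {}
    for mirna, ip in ip_counts.items():
        mock = mock_counts.get(mirna, 0)
        ip_rpm = (ip + pseudo) / ip_total * 1e6
        mock_rpm = (mock + pseudo) / mock_total * 1e6
        out[mirna] = math.log2(ip_rpm / mock_rpm)
    return out

# Toy numbers for two miRNAs in a neuronal HA::ALG-1 IP vs the wild-type mock IP.
ip = {"mir-791": 5200, "mir-1": 40}
mock = {"mir-791": 310, "mir-1": 35}
lfc = log2_enrichment(ip, mock, ip_total=12_000_000, mock_total=11_000_000)
enriched = {m: v for m, v in lfc.items() if v > 2}   # log2 fold change > 2 cut-off
print(enriched)   # mir-791 passes, mir-1 does not
```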
To assess the sensitivity of our approach, we chose to focus on neuron-specific miRNAs, which are often expressed in only a few cells within the total population of 300 neurons in each animal 27 . miR-791 is expressed exclusively in three pairs of carbon dioxide sensory neurons (BAG, AFD, and ASE; Fig. 2e), acting to specifically silence target mRNAs that would otherwise disrupt normal neuronal physiology 28 . We were readily able to detect a strong association of miR-791 with both ALG-1 and ALG-2 in neurons but not in any other tissue type (Fig. 2e), demonstrating the ability of our approach to accurately uncover miRNA:AGO interactions occurring in just six cells of the whole animal. Impressively, the lsy-6 miRNA, which during development is restricted to only a single neuron (ASEL) in the whole animal ( Fig. 2f), where it directs left-right neuronal asymmetry 12 , was also efficiently detected in neurons where it associated more strongly with ALG-2 than ALG-1 (Fig. 2f). Three other neuronal miRNAs (miR-790, miR-793, and miR-1821), which also display highly specialized cell-type-specific expression patterns within the ASE sensory neuron pair 19,28 , were also strongly detected in association with neuronal AGOs (Supplementary Fig. 2). Together, these results suggest that our technique is both sensitive and accurate at single-cell resolution in whole animals. miRNAs display preferential, flexible, and temporally dynamic AGO loading. When we focussed on AGO associations amongst the total pool of enriched miRNAs, a strong preference was observed for miRNA:ALG-1 specific interactions (44) over miRNA:ALG-2 specific interactions (10), with 41 miRNAs displaying overlapping association with both AGOs (Fig. 3a). Within individual tissues, this trend held true for the intestine and BWM, but varied dramatically in the nervous system. In neurons, individual miRNAs were evenly distributed in their propensity to load exclusively into either ALG-1 or ALG-2 ( Fig. 3a), suggesting that ALG-2 plays a more significant role in miRNA-mediated repression in the nervous system than in either of the other major tissues studied here. Although these results demonstrated a preference for AGO-type-specific miRNA interactions within individual cell types, given the high homology, lack of nucleotide preference for miRNA loading, and shared subcellular localization of ALG-1 and ALG-2 16,17,29 , we questioned whether miR-NAs could re-load between AGO proteins to provide regulatory flexibility under cellular situations in which AGO availability was altered. Using CRISPR-Cas9, we engineered endogenous ALG-1 with an N-terminal 3xFLAG::GFP tag in an alg-2(ok304) null background. miR-71, which is associated with both ALG-1 and ALG-2 in most major tissue types, was 8-fold more enriched with ALG-1 in alg-2(ok304) mutants than in wild-type animals (Fig. 3b), suggesting a surrogate role for ALG-1, which increased in abundance in the absence of alg-2 ( Supplementary Fig. 3). To determine whether ALG-1 could compensate for ALG-2 in individual cell-types, we focussed on those miRNAs whose loading was restricted to a specific tissue. The intestine-enriched miR-83, neuron-specific miR-791, and muscle-specific miR-1 all displayed greater association with ALG-1 in alg-2(ok304) backgrounds ( Fig. 3c-e), suggesting that the surrogate role of ALG-1 was not tissue-dependent. Indeed, this general trend held true for other tissue-specific miRNAs ( Supplementary Fig. 4). 
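The fold-enrichment comparisons above come from quantitative RT-PCR of immunoprecipitated versus input RNA. One common way to express such IP/input ratios from cycle thresholds is sketched below; the Ct values and the 10% input fraction are invented, and the authors' exact normalisation is not specified in this excerpt beyond "IP normalized to input".

```python
import math

def ip_over_input(ct_ip, ct_input, input_fraction=0.1):
    """IP/input recovery from qRT-PCR cycle thresholds, with the input Ct
    adjusted for the fraction of lysate saved as input (10% here)."""
    adjusted_input = ct_input - math.log2(1.0 / input_fraction)
    return 2.0 ** (adjusted_input - ct_ip)

# Invented Ct values for miR-71 recovered with ALG-1 in wild type vs alg-2(ok304):
wt = ip_over_input(ct_ip=24.1, ct_input=26.5)
mut = ip_over_input(ct_ip=21.1, ct_input=26.5)
print(mut / wt)   # 8.0, i.e. an 8-fold higher relative recovery in the mutant
```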
In the reciprocal experiment, where we used an endogenously labeled 3xFLAG::RFP::ALG-2 strain 16 harboring a deletion in alg-1 (gk214), we observed a converse effect whereby ALG-2 compensated for the lack of ALG-1 by associating more strongly with multiple miRNAs (Fig. 3b-e and Supplementary Fig. 4), even though the abundance of ALG-2 was not increased by a loss of alg-1 (Supplementary Fig. 3). Under certain physiological contexts, AGOs may change in cellular abundance, resulting in potential switches in miRNA pathway regulation. Consistent with previous findings 16, we found that ALG-1 protein levels decreased during the onset of adulthood and continued to decline during ageing, with ALG-2 levels remaining relatively stable over time (Fig. 3f). Although the molecular determinants of this downregulation are unknown, probing miRNA:AGO interactions over time revealed a progressive shift of miRNA loading toward ALG-2 as ALG-1 levels declined (Fig. 3g-j); whether this re-loading between AGO types serves a biological function during ageing remains to be determined. However, alg-1 has been shown to promote longevity whereas alg-2 restricts longevity in an insulin/IGF-1 signaling-dependent manner 16, suggesting that their functions can be antagonistic later in life. Taken together, these results suggest that individual miRNAs preferentially associate with specific AGOs under particular cellular contexts. However, the system maintains flexibility when the AGO abundance or cellular context changes, indicating their capacity to act as mutual surrogates for each other's activities. In support of this idea, we found that alg-2 could functionally compensate for the removal of alg-1 under certain contexts. For example, developmental rates under multiple growth conditions (fed and temporarily starved) were reduced in alg-1(gk214) mutants when compared to their wild-type counterparts (Fig. 3k and Supplementary Fig. 6). This defect could be partially rescued by the overexpression of alg-2 under the ubiquitous eft-3 promoter (Fig. 3k). Indeed, selective overexpression of alg-2 in the intestine (ges-1p::alg-2) had equivalent rescuing ability to overexpression of alg-1 in the intestine (ges-1p::alg-1), demonstrating that the AGO proteins were functionally interchangeable in this tissue during development (Fig. 3k). Comparison of miRNA expression, abundance, and AGO loading. MiRNA loci fall under the same Pol II transcriptional control as protein-coding genes, but how miRNA activity is regulated post-transcriptionally, particularly during the assembly of effector miRNA:protein complexes, remains relatively unexplored, especially from a spatial perspective. We compared patterns of expression, abundance, and assembly with AGOs to resolve potential points of post-transcriptional regulation of individual miRNAs in single cell types. By analysing transcriptional reporters of miRNA promoters (with the assumption that they faithfully recapitulate endogenous expression), as well as specific examples of mature miRNA abundance obtained through Hen1 cell-type-specific profiling 19, we found spatial correlations between all three levels of miRNA regulation, as observed for the sensory neurons described above (Fig. 2e, f, and Supplementary Data 2). For example, miR-75 was associated with both ALG-1 and ALG-2 exclusively in the intestine, which mirrored the expression pattern of a mir-75p::GFP transgene (Fig. 4a) as well as the reported enrichment of mature miR-75 19. Indeed, multiple examples of intestinal miRNAs (miR-77, miR-238 and miR-243) matched this profile (Supplementary Fig. 7).
The same was also true for the largely neuron-specific miR-90 (Fig. 4b) and the muscle-specific miR-67 (Fig. 4c). Despite a frequently tight correlation, we also identified specific examples whereby the expression pattern of a miRNA and/or its cellular abundance diverged spatially from its association with AGOs and therefore potential activity. The vast majority of miR-239a, for instance, was found to be loaded into intestinal ALG-1 (Fig. 4d), despite a mir-239ap::GFP expression pattern indicating predominantly neuronal expression in the head (Fig. 4d) and mature miR-239a enrichment in neurons, intestine, and muscle 19,30 . Likewise, miR-83 was significantly enriched within the intestine in both ALG-1 and ALG-2, but was expressed in neurons and the intestine (Fig. 4e), with mature miR-83 being reported in intestine, neurons, and body wall muscle 19 . Although examples of post-transcriptional pathways that influence pri-miRNA or pre-miRNA processing, or stability between different tissue types and/or developmental states have been identified 31,32 , our results suggest that the formation of miRNA:AGO effector complexes can in some cases be uncoupled from the abundance of mature miRNAs within a cell, and may therefore represent a previously unappreciated control point for the regulation and segregation of miRNA activities between distinct cellular lineages. Detection of miRNA isoforms (isomiRs) and their differential AGO loading profiles to reference miRNAs. Within our dataset of total miRNAs,~17% of reads were distinguishable from miRNA with reference sequences (Fig. 5a, b and Supplementary Data 3). These often consisted of single nucleotide substitutions or nucleotide shifts at either the 3′ or 5′ end of the sequence. Such miRNA isoforms, termed isomiRs, can be derived as a result of RNA editing 33,34 , the activities of terminal-nucleotide transferases and 3′-exonucleases [35][36][37] , or imprecise or multiple cleavage events mediated by Drosha or Dicer during miRNA biogenesis. These "isomiRs" are often discarded as misprocessed artifacts and as such their biological roles remain unclear. Indeed, it is yet to be determined whether their function deviates from that of the reference miRNAs usually produced by the loci, even though the sequence alterations may influence target recognition and miRNA processing and stability. The highly significant read counts obtained in our AGO-immunoprecipitation samples suggested that the isomiRs revealed here were genuine and biologically relevant (Fig. 5a, b). Consistent with this concept, we found that many isomiRs displayed cell-type and AGO-type specificity that indicated function ( Supplementary Fig. 8). In most instances, these patterns correlated with those of the reference miRNAs, supporting the notion that isomiRs act cooperatively with these miRNAs to target common biological pathways 38 . However, we also identified isomiRs (e.g., those derived from miR-71) with spatially divergent loading patterns, suggesting that they had distinct biological functions to their reference miRNA counterparts ( Supplementary Fig. 9). Moreover, a fraction of isomiRs harbored nucleotide changes within their seed sequence, which could have a profound effect on target recognition and therefore biological function (Supplementary Fig. 9 and Supplementary Data 4). Together, these results suggest that isomiRs could greatly diversify the functionality of individual miRNAs through alterations in cellular loading patterns and seed sequences. 
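To illustrate how sequenced reads can be binned into reference miRNAs versus the isomiR classes described above, the toy classifier below compares each read with a reference mature sequence and reports 3′ trimming or extension, a one-nucleotide 5′ shift (which changes the seed), or internal substitutions. It is a deliberately simplified sketch, not the annotation pipeline used in the study.

```python
def classify_read(read, ref):
    """Toy isomiR call of a read against a reference mature miRNA sequence."""
    if read == ref:
        return "reference"
    if read.startswith(ref[:8]):                 # shared 5' end -> 3' variant
        if ref.startswith(read):
            return "3p_trimmed"
        if read.startswith(ref):
            return "3p_extended"
    if read[1:] == ref[:len(read) - 1] or read[:-1] == ref[1:len(read)]:
        return "5p_shifted (seed changed)"       # one-nucleotide 5' offset
    if len(read) == len(ref):
        mism = sum(a != b for a, b in zip(read, ref))
        if mism <= 2:
            return f"substitution x{mism}"       # e.g. editing or sequencing error
    return "other"

ref = "UGAGGUAGUAGGUUGUAUAGUU"                   # let-7, used only as an example
print(classify_read(ref[:-2], ref))              # 3p_trimmed
print(classify_read(ref + "U", ref))             # 3p_extended
print(classify_read(ref[1:] + "A", ref))         # 5p_shifted (seed changed)
```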
Furthermore, the sensitivity of detection combined with the cell-type-specific resolution of our approach also enabled the discovery of multiple new candidate miRNA loci (Supplementary Note 1). Cell-type-specific miRNA loading can help predict biological function. The identification of new miRNAs in virtually every species has far outpaced their assignment to biological roles, creating a void between discovery and function. In C. elegans, knockouts of individual miRNAs often have no obvious phenotype 39, and it is only in sensitized (e.g., alg-1 mutant) backgrounds that some miRNA-dependent phenotypes are revealed 40.
[Fig. 3 Preferential, flexible and temporally dynamic AGO loading. a Proportion of miRNAs loaded into HA ALG-1 or HA ALG-2 in combined (left-total) or separate tissue types (right). b Quantitative RT-PCR of miR-71 loading into 3×FLAG::GFP::ALG-1 (left) or 3×FLAG::RFP::ALG-2 (right) in the indicated genetic backgrounds. Each dataset represents the ratio of IP normalized to input. c-e The same as shown in b for the intestine-specific miR-83 (c), neuronal-specific mir-791 (d), and muscle-specific miR-1 (e). f Western blot analysis of 3×FLAG::GFP::ALG-1 (left) or 3×FLAG::RFP::ALG-2 (right) at L4, 2-day-old adult (2 DOA) and 7-day-old adult (7 DOA). Tubulin represents loading control. Western blots for ALG levels during aging were repeated at least three times with similar results. g Quantitative RT-PCR of miR-71 loading into 3×FLAG::RFP::ALG-2 at the developmental stage indicated. Each dataset represents IP values normalized to input. h-j The same as shown in g for intestine-specific miR-83 (h), neuronal-specific mir-90 (i), and muscle-specific miR-1 (j). Error bars for every column graph represent +/− s.e.m. of three biological replicate immunoprecipitations. k Body size quantification of the indicated genotypes after recovery from starvation. Error bars represent +/− s.e.m. P values calculated using one-way ANOVA with Tukey's multiple comparisons test. n ≥ 27 biologically independent animals for each strain tested.]
Having developed a cell-type-specific map of miRNA:AGO interactions, we predicted that individual miRNA functions could be derived by focussing on the cells in which they were loaded. In support of this idea, miR-1, which was expressed and loaded into AGOs in muscle cells (Supplementary Fig. 10a), has been shown to play a key role in retrograde signaling at neuromuscular junctions 26. Likewise, miR-234, which was expressed and AGO-loaded exclusively in the nervous system (Supplementary Fig. 10b), was recently demonstrated to regulate genes involved in neuropeptide release 41. The miRNAs miR-75 and miR-60 were associated with ALG-1 and ALG-2 exclusively in the intestine (Figs. 4a and 6a). To test whether these miRNAs were involved in intestinal-related functions, we studied them in the context of starvation, given that the primary function of the intestine is to absorb and process food-derived nutrients. Interestingly, we found that body fat content was significantly reduced in mir-60(n947) and mir-75(n4472) mutants (Fig. 6b and Supplementary Fig. 11a). Moreover, a temporary starvation period incurred during early development (Supplementary Fig. 6a) exacerbated developmental rate defects in mir-60(n947) mutants and revealed that miR-60, but not miR-75, was required for full recovery to normal body size in adulthood (Fig. 6c, d and Supplementary Fig. 6b and 11b).
This deficiency in the mir-60(n947) null mutant could be fully rescued, when we selectively expressed mir-60 in the intestine (elt-2p::mir-60) as a single copy insertion (Fig. 6d), suggesting that miR-60 operates in the gut to promote recovery following periods of low food availability in early life, possibly through fat storage regulation. Moreover, these results suggest that the miRNA loading map generated here provides functionally relevant activity-based associations of miRNAs with AGOs. Interestingly, we also observed recovery from starvation, albeit partially, when we expressed mir-60 exclusively in either the BWM (myo-3p::mir-60) or nervous system (rgef-1p::mir-60) of mir-60(n947) animals (Fig. 6d). Because a mir-60p::gfp reporter is expressed only in the intestine ( Fig. 6a; Martinez et al. 42 ), the same tissue type in which miR-60 exclusively interacted with ALG-1 and ALG-2, our results suggest that miR-60 could either act in other cell types, where it is not normally active to promote recovery from starvation, or spread to the intestine from distal tissues via a cell nonautonomous action similar to that recently reported for miR-83 43 . ALG-2 is required in the nervous system for widespread changes in the translatome. In addition to providing information on miRNA function, our results are also useful in revealing the biological activities of individual AGOs. Our data indicate that an increased proportion of miRNAs are associated with ALG-2 in neurons when compared with intestinal cells and muscle cells (Figs. 2c and 3a). Indeed, while we observed that the expression of GFP::ALG-1 was largely ubiquitous, RFP::ALG-2 was enriched in the nervous system of late larval and adult stages ( Supplementary Fig. 12), consistent with previous findings 16,17 . Global misregulation of miRNA target regulation has previously been observed in animals deficient in alg-1 but not alg-2, suggesting that ALG-1 serves as the primary AGO for the miRNA pathway during development, with the contribution of ALG-2 being mostly redundant 44,45 . However, because our results suggested that ALG-2 may play an important regulatory role in neurons, we investigated the impact on the neuronal translatome in animals lacking alg-2. We adopted a polysome immunoprecipitation approach (Fig. 7a) in which we engineered strains expressing a FLAG epitope-tagged version of RPL-18 (a component of the small subunit of 80S ribosomes and polysomes) under the control of the neuron-specific promoter rgef-1p (Fig. 7a). MosSCI was used to generate single-copy insertions of the transgene and we confirmed pan-neuronal expression of the construct by following SL2::his-58::GFP fluorescence (Fig. 7a). After immunoprecipitation from populations of whole animals, mRNAs actively translating exclusively on neuronal polysomes were purified and sequenced. Differential analysis comparisons between wild-type and alg-2(ok304) backgrounds revealed that 171 polysomeassociated transcripts were upregulated, whereas 180 were downregulated (Fig. 7b, c). The vast majority of these had known neuronal expression and functions (Supplementary Data 6). Gene ontology (GO) term analysis indicated a broad range of functional categories of both upregulated and downregulated transcripts ( Fig. 7c and Supplementary Data 6) suggesting that ALG-2, via the miRNA interactions we established in neurons, markedly influences neural gene-regulatory networks in a wide range of cellular functions. 
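The differential comparison just described (171 transcripts up and 180 down in neurons of the alg-2 mutant) reduces, once the count modelling has been done, to filtering a results table by adjusted p value and fold-change direction. The Python sketch below illustrates only that final filtering step on a synthetic table; the column names and the FDR < 0.01 cutoff mirror the description here, but the upstream edgeR-style negative binomial testing is assumed to have been performed elsewhere and none of the values are the study's data.

```python
# Minimal sketch: split a differential-expression results table into up- and
# down-regulated sets at FDR < 0.01. The table here is synthetic; in the actual
# workflow it would come from an edgeR-style pairwise test (alg-2 mutant vs
# wild type) on neuronal polysome IP read counts.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
results = pd.DataFrame({
    "gene": [f"gene_{i}" for i in range(n)],   # placeholder identifiers
    "logCPM": rng.normal(5, 2, n),             # average expression (log2 counts per million)
    "logFC": rng.normal(0, 1, n),              # alg-2 mutant vs wild type
    "FDR": rng.uniform(0, 1, n),               # Benjamini-Hochberg adjusted p value
})

fdr_cutoff = 0.01
significant = results[results["FDR"] < fdr_cutoff]
upregulated = significant[significant["logFC"] > 0]
downregulated = significant[significant["logFC"] < 0]

print(f"up: {len(upregulated)}, down: {len(downregulated)}")
```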
Discussion

The development of an embryo into a complex multicellular organism requires the setting of specific transcriptional networks that not only direct the development of, but also maintain the correct identity and function of, specific cell types under a multitude of environmental influences. MiRNA-mediated repression is a central mechanism of gene regulation that can direct and, more often, fine-tune these networks at the post-transcriptional level. Typically, genome-wide temporal miRNA expression profiles are achieved at the whole-organism level, with spatial information then incorporated on a case-by-case basis through detailed expression studies of individual miRNA reporters that are driven by putative promoter sequences. However, these approaches are laborious and assume that (i) the cis-regulatory sequences and transgenic methods adopted recapitulate endogenous expression, and (ii) miRNA expression correlates precisely with loading and thus potential function. Post-transcriptional regulation of miRNA biogenesis, stability, and AGO loading represent points at which expression and activity may diverge 31,32 . One recent approach to overcome many of these issues was the use of cell-specific expression of the Arabidopsis methyltransferase HEN1 to methylate, and therefore chemoselectively protect and facilitate cloning of, miRNAs for high-throughput sequencing 19 . This highly sensitive approach yielded valuable insights into the cell-type-specific abundance of mature miRNAs in C. elegans, but was not able to define those that constitute a silencing complex and are therefore presumed to be active. Physical isolation of cells by either laser dissection or fluorescence-activated cell sorting followed by miRNA sequencing 20,23,46,47 is also limited in its inability to resolve miRNA loading under non-invasive conditions. In addition, although AGO pulldown experiments and subsequent analyses of miRNA associations have been performed at a whole-animal level 16-18,29 , subtle and likely biologically relevant interactions have been missed because of the lack of cellular resolution afforded by these approaches. To obtain a more robust molecular understanding of miRNA biological function and regulation, we systematically probed cell-specific interactions between miRNAs and their AGO effector proteins, which constitute the central component of the terminal silencing complex. By implementing and expanding upon approaches that we previously used in Arabidopsis root cell layers 48 , we have combined AGO immunoprecipitation with cell-type specificity to reveal miRNA loading in a range of cell and tissue types from whole animals. In addition to validating the robustness of this approach against intertissue contamination, we found that it was also highly sensitive and could detect distinct miRNA:AGO interactions originating from a single cell within the whole animal. Despite this, there are several potential caveats associated with a miRNA:AGO immunoprecipitation-based approach to address cell-type-specific miRNA loading patterns. The inherent background that we observed (i.e., detection of miRNAs in wild-type animals upon immunoprecipitation), while addressed in this current work, could provide a barrier to the true potential of the technique. Additionally, although stable miRNA:AGO associations are likely to represent an active or poised miRNA state, it is possible that loaded miRNAs may themselves be subject to further regulation before initiating target recognition and repression.
For example, the seed nucleotides of the guide strand are not readily accessible without considerable conformational changes within the Piwi-Argonaute-Zwille (PAZ) domain of AGO proteins 49 . It also remains a possibility that the AGO proteins, which initially load miRNAs in their duplex form, do not automatically return to a conformational ground state that promotes expulsion of the non-guide strand (miRNA*) to form the mature silencing complex 50,51 . However, we observed a strong bias for guide strand loading in our datasets, indicating that the majority of miRNA:AGO complexes detected were based on single-stranded and therefore actively loaded miRNAs. As such, and to our knowledge, this represents the most precise and direct map of cell-type-specific miRNA loading at a genome-wide scale in animals. Indeed, the resolution provided by cell-type-specific analyses is exemplified by the discovery of hundreds of isomiRs and 37 new candidate miRNAs, as well as their spatial and AGO-loading patterns. This essentially reveals that even in one of the most well-characterized models of miRNAs 52,53 , a multitude of hidden and rich layers of miRNA biology remain to be elucidated. Importantly, our results demonstrate that AGO binding itself can be uncoupled from the cellular abundance of mature miRNAs, representing experimental evidence of a layer of miRNA regulation that can spatially segregate activity. Indeed, studies have shown that the availability of AGOs is limited, and that at any time in a cell there is a several-fold excess of unbound miRNAs relative to miRNA:AGO silencing complexes 54-56 . Comparisons between tissue-specific mature miRNA abundance 19 and tissue-specific miRNA:AGO loading (Supplemental Data 2) suggest that this is also true in nematodes. Moreover, our results reveal preferential interactions between specific miRNAs and either of the two AGO types within the same cell type at the same time (Supplemental Data 2), suggesting another dimension of miRNA regulation directed by AGO selection that was previously too subtle to appreciate in whole-animal studies. Interestingly, in other species, such as Drosophila and potentially mammals, the identity of the AGO protein associated with a miRNA can dictate the mechanism that will lead to mRNA repression 57 . This is also true in C. elegans where the assembly of different miRNA RISCs (miRISCs) in the germline and soma affects targeted mRNAs distinctively 58 . In particular, somatic ALG-1 has been shown to regulate both mRNA stability and/or translation whereas ALG-2 appears to act more exclusively through translational repression 58 . Although ALG-1 and ALG-2 are highly similar in sequence and structure, they appear to form distinct protein complexes in vivo 29 . For example, ALG-1, but not ALG-2, can interact with the receptor for activated C-kinase (RACK1), which mediates miRISC recruitment to polysomes 59,60 .
This suggests that distinct miRNAs, via their preferential interaction with either ALG-1 or ALG-2, can direct distinct mRNA repression mechanisms within a specific cellular context. However, this system still appears to be intrinsically flexible if, for instance, one of the AGO proteins is either not expressed, degraded or downregulated. These distinctions also appear to be dynamic over time and could underlie biological purpose. For instance, the gradual decline in ALG-1 protein levels during ageing and the concomitant increase in miRNA:ALG-2 associations suggest a possible biological function for this exchange across lifelong physiological transitions. This could be purposely coupled with the primarily translational inhibitory function of ALG-2 58 to afford fast, yet reversible, post-transcriptional gene regulation during ageing, where longer-term changes in gene expression, such as those invoked by developmental cues, are no longer needed. Indeed, it is suspected that modes of post-transcriptional regulation used by miRNAs in early embryogenesis differ from those used in late embryogenesis in a range of species 61-63 . The apparent functionality of AGO selection is also revealed when making comparisons between somatic tissue types. In neurons, we showed that ALG-2 is more abundant and associates proportionately with more miRNAs than in intestinal and muscle cells, when compared to ALG-1. Although it is not clear why ALG-2 is especially relevant to the nervous system, we found that it is required to regulate, whether directly or indirectly, ribosome-associated mRNA levels of a wide array of genes in this tissue type. Because of their multitarget potential, individual miRNAs can act pleiotropically in many biological processes, broadening while at the same time complicating our ability to define distinct functions. As demonstrated with miR-60 and to a lesser degree miR-75 (in addition to the other published examples crossed with our data), defining the cell-type-specific activities of miRNAs can guide interrogation of their biological function. Identifying their direct targets is a goal that can also be aided by knowledge of cell-type-specific miRNA activity. Despite many years of active research, algorithmic predictions, and the implementation of a variety of different biochemical and sequencing advances, the identification and confirmation of miRNA targets remains one of the primarily unresolved areas of miRNA research in metazoans. A combination of the imperfect complementarity required for the short miRNA seed sequence, and the primary action of miRNAs on translation rather than exclusively mRNA stability, provides additional challenges to the accurate prediction and confirmation of miRNA targets. The use of CLIP or CLASH techniques 64,65 can provide great insight into miRNA:target associations, but at present, the lack of spatial resolution and high background associated with these techniques likely hinders their true value in deciphering and accurately predicting miRNA-mediated target regulation.

Fig. 7 Specific effects of alg-2 on the neuronal "translatome". a Schematic representation of cell-type-specific polysome immunoprecipitation setup. Single-copy FLAG-tagged RPL-18 protein driven by the neuron-specific rgef-1 promoter allows purification of neuron-specific translating mRNAs from either wild-type or alg-2 mutant backgrounds. Scale bars, 50 µm. b Scatterplot showing expression levels (log2 CPM, counts per million) of mRNAs immunoprecipitated with polysomes from neurons in alg-2 mutant compared to wild-type worms. Red points represent significantly differentially upregulated or downregulated genes with an FDR value of <0.01. Fold change and significance were calculated by fitting a two-sided negative binomial model to processed read counts, according to the edgeR pipeline for pairwise comparisons between multiple groups. c Broad gene ontology categorization of mRNAs upregulated (left) or downregulated (right) in neurons of alg-2(ok304).
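As a companion to the Fig. 7b legend above, the hypothetical Python/matplotlib sketch below shows how such a scatterplot of per-gene log2 CPM values is conventionally drawn, with genes passing an FDR < 0.01 cutoff highlighted in red. All values are simulated; it illustrates the plotting convention only, not the study's data.

```python
# Minimal sketch of a Fig. 7b-style scatterplot: per-gene log2 CPM in wild type
# vs alg-2 mutant, with significant genes (FDR < 0.01) drawn in red.
# All values are simulated for illustration.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n = 2000
wt = rng.normal(5, 2, n)                    # log2 CPM, wild type
mut = wt + rng.normal(0, 0.3, n)            # log2 CPM, alg-2 mutant
fdr = rng.uniform(0, 1, n)
fdr[np.abs(mut - wt) > 0.8] *= 0.005        # make larger changes "significant" in this toy data

sig = fdr < 0.01
plt.scatter(wt[~sig], mut[~sig], s=4, color="grey", label="not significant")
plt.scatter(wt[sig], mut[sig], s=6, color="red", label="FDR < 0.01")
plt.xlabel("log2 CPM, wild type")
plt.ylabel("log2 CPM, alg-2(ok304)")
plt.legend()
plt.savefig("fig7b_style_scatter.png", dpi=150)
```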
Direct evidence of increased protein levels upon miRNA depletion are limited by case-by-case, single target analyses, the lack of full genome coverage, and cell-type-specific adaptations currently afforded by proteomic approaches [66][67][68] . Cell-type-specific polysome immunoprecipitation by itself could assay differences in steady-state translating mRNAs, either between wild-type and mutant backgrounds as we demonstrated here or by incorporating other cell types to distinguish cell-typespecific translatomes in the future. Despite potentially providing accurate cell-type-specific information on steady-state mRNAs, polysome immunoprecipitation may not accurately pick up miRNA targeting at the translational repression level. That being said, by adding a ribosome footprinting step to the technique 61,69,70 , combined with our miRNA:AGO loading map, we could, in principle, gain significant insights into miRNA target confirmation and downstream regulation at previously unattainable levels of resolution. The transgenic toolkit of strains developed here could also be useful in assessing miRNA and AGO loading in a range of settings not tested in the current study, such as during specific developmental stages, ageing, or under a wide range of environmental conditions, providing insight into miRNA-directed biological processes. Moreover, the universal Mos1-mediated singlecopy insertion vector backbones constructed for this study could be readily modified with alternative promoter sequences to achieve a genome-wide view of miRNA functionality in any cell or tissue of interest in C. elegans. A final more controversial area of miRNA biology, which could potentially be investigated by using an extended version of our data is that of miRNA movement. Although miRNA movement between developmentally distinct cell types has been recently demonstrated in animals 43 , it would appear, and indeed would be logical to presume, that this movement would be more frequent within a subset of cells in a defined tissue type 48,71 . A priori, derivations of our technique could be used to address these issues, and enhance the resolution of our understanding of the spatial and temporal dynamics of miRNAs in animals. Molecular cloning. All plasmids were cloned using a modified version of pCFJ150 using either the 3-way gateway or standard PCR-based techniques. Promoter sequences (eft-3p, ges-1p, rgef-1p, and myo-3p) were amplified using the oligos listed in Supplementary Table 2 (eft3p-F and eft3p-R, ges-1p-F and ges-1p-R, rgef-1p-F and rgef-1p-R, myo-3p-F and myo-3p-R, unc-25p-F and unc-25p-R) and then subcloned into a modified restriction-compatible version of pDONR4-1r (Invitrogen). Full-length cDNA versions of alg-1 and alg-2 incorporating an Nterminal HA tag were amplified using the oligos listed in Supplementary Table 2, and cloned into the entry vector pDONR221 using gateway technology. A PCR product containing SL2:His-58:GFP was amplified using the oligos SL2-2-F and tbb2-3-R and recombined into the entry vector pDONR P2R-P3 using gateway technology. The subsequent entry clones (cell-specific promoters, HA:ALG1/2 and SL2:His58:GFP) were recombined into the destination vector modified pCFJ150. pSZ180 (prgef-1p::FLAG::RPL-18::SL2::HIS-58::GFP::tbb-2 3′UTR) was cloned by amplifying RPL18 with an N-terminal FLAG tag using the oligos F-RPL18-F and RPL18-SL2-R, and substituting the HA:ALG-1 sequence of pSZ86 using this amplicon and the oligos F-RPL18-F and rgef-1p-R. 
pSZ176 (ALG1::FP-SEC-pDD282) was generated essentially as described in the ref. 73 using the oligos CB-76 and CB-77 for the 5′ HDR and the oligos CB-78 and CB-79 for the 3′ HDR. Both fragments were cloned in the SpeI digested vector pDD282 using Gibson assembly. pSZ178 (alg-1::sgRNA) was generated using reverse PCR-based amplification using the oligos Cas9-sg-ALG1-F and Cas9-sg-ALL-R on the template vector pDD162 to yield a sg-RNA with the sequence; AGCGCUUUCAAUCCCUCUCAUGG. pSZ256 was cloned by amplifying the upstream putative promoter region of miR-77 to the end of the stemloop with the oligos miR-77p-NotI-F and miR-77-NheI-R and cloning into a modified version of the vector pSZ246, which contained a multicloning site upstream of the SL2::HIS-58::GFP::tbb-2 3′UTR expression cassette (amplified with CD-3 and SL2-F), using NotI and NheI. pSZ257 was cloned by amplifying the upstream putative promoter region~4 kb upstream of of mir-238 to the end of the stemloop with the oligos miR-238p-NotI-F and miR-238-NheI-R and cloning into a modified version of the vector pSZ246, which contained a multicloning site upstream of the SL2::HIS-58::GFP::tbb-2 3′ UTR expression cassette (amplified with CD-3 and SL2-F), using NotI and NheI. Transgenic strains. DNA constructs were injected to generate single copy insertion lines using the mosSCI method 24 . A complete list of transgenic strains used in this study is provided in Supplementary Table 1. Transgene insertions were confirmed via genotyping using the oligos sz-6, sz-13, and sz-14 (Supplementary Table 2). ALG immunoprecipitation. Immunoprecipitation of miRNA:AGO complexes was performed essentially as described by 49 but adapted for C. elegans. Briefly, approximately 50,000-100,000 synchronized L4-staged worms were grown on 4 × 100 mm nematode growth medium (NGM) plates seeded with OP50 Escherichia coli bacteria. Worms were harvested and washed three times in 10 mL of M9 buffer in a 15 mL falcon tube. Samples were briefly centrifuged at~500×g for 2 min, before the supernatant was removed. The worm pellets were then flash frozen in liquid N2 and stored at −80°C or used immediately for subsequent experiments. Worm pellets were ground to a fine powder in liquid N2, resuspended and lysed in 2-3 v/v of immunoprecipitation buffer (IP buffer; 50 mM Tris-HCl, pH 7.5, 150 mM NaCl, 10% glycerol, 0.1% NP40) containing 1 tablet/10 mL complete protease inhibitor cocktail (Roche) for~10 min with intermittent mixing by inversion. All subsequent steps were performed at 4°C. Lysates were cleared by centrifugation at 14,000×g for 10 min. Cleared lysates were normalized by protein quantification using a modified Lowery procedure with the DC TM Protein Assay Kit (Bio-Rad). Five to ten percent of this lysate was kept as input fraction. Lysates were then precleared with 15 µL of protein A/G magnetic beads (Pierce Scientific) for 1-2 h with rotation at 4°C. Fifteen microliter of either FLAG or HA (depending on transgenic strain background) conjugated beads were subsequently added to the mixture, followed by 2-3 h incubation with rotation at 4°C. The beads were washed two times for 15 min each in ice cold IP buffer, followed by two subsequent washes in "high salt" IP buffer (containing 300 mM NaCl), also for 15 min each. After these washes, TRIzol reagent (Invitrogen) was added to the beads and RNA extracted from the aqueous phase and protein from the organic phase, according to manufacturer's instructions. 
For FLAG-tagged CRISPR lines, immunocomplexes were eluted by vigorous shaking in 150 µL of lysis buffer supplemented with 200 ng/µL of FLAG peptide (Sigma) at 4°C for 30 min. RNA and protein were subsequently extracted as described above. Novel candidate miRNAs were annotated using miRDeep2 (v2.0.0.8) as described 80 . To ensure robust identification of novel miRNAs, only candidates with a valid pri-miRNA hairpin structure and score > 3 were considered for further validation and analysis. Novel miRNA target prediction was performed by running miRanda (v3.3a) 81 in a local environment (Ubuntu 16.04.5 LTS), where mature novel miRNA sequences were compared to the C. elegans reference genome (WBcel235). Hits were validated by observing binding energy and base matching between 5′ region of novel miRNAs and known C. elegans mRNAs. RNA gel blot analysis. Total or immunoprecipitated RNA was separated on 17.5% polyacrylamide-urea denaturing gels, then transferred to Hybond-NX nitrocellulose membranes (GE Healthcare), and chemically cross-linked via 1-ethyl-3-(3dimethylaminopropyl) carbodiimide-mediated cross-linking 81 . Oligonucleotides used for probes were complements of the respective miRNA sequences, and were end-labeled using T4 PNK (Thermo Scientific) with [γ-32P] dATP. The sequences of all probes are listed in Supplementary Table 2. Real-time qRT-PCR analysis. Total (input) or AGO-immunoprecipitated RNA was reverse transcribed 82 . Briefly, 100-500 ng of input or IP RNA was reverse transcribed in a final volume of 10 µL containing 2 µL 5× ProtoScript II reaction buffer (NEB), 25 µM ATP, 25 µM dNTPs, 50 µM RT primer (IK-44), 1 unit of poly (A) polymerase (Invitrogen) and 20 units of protoscript II reverse transcriptase (NEB). Reactions were incubated at 42°C for 1 h followed by enzyme inactivation at 95°C for 5 min. Real-time quantitative reverse transcriptase PCR (RT qPCR) was performed using a LightCycler 480 II (Roche) with SensiFAST SYBER No-ROX (Bioline Meridian Biosystems) using the gene-specific primers listed in Supplementary Table 2. PCR was carried out in technical triplicates using the following cycling conditions: 95°C for 3 min, followed by 45 cycles of denaturation at 95°C for 10 s, annealing at 60°C for 10 s, and elongation at 72°C for 20 s. A melting curve was generated at the end of the amplification in every run to confirm primer specificity. Threshold cycle (C t ) values were determined by calculating the second derivative maximum of three technical triplicates for each sample. Data were analysed using Prism-GraphPad Software v8.4.0. Western blot analysis. Total proteins were extracted via lysis during immunoprecipitation experiments or by boiling 50-100 staged worms in 1× sample buffer. Proteins were resolved on SDS-PAGE gels, and electro-transferred to Immobilon-P PVDF membranes (Millipore). After blocking for 30 min in 1× PBS + 0.1% Tween-20 supplemented with 5% skim milk powder or 3% BSA, subsequent antibody incubations were carried out overnight at 4°C in the same solution. Primary anti-HA (Sigma H6533) and anti-FLAG (Sigma F3165) antibodies were diluted 1/5000. Membranes were washed four times in 1× PBS + 0.1% Tween-20, and then incubated for 1 h at room temperature with horseradish peroxidase-conjugated goat anti-rabbit (Abcam ab6721) or goat anti-rat (Cell Signaling 7077S), diluted 1/10,000. 
After washing again four times in 1× PBS + 0.1% Tween-20, detection was performed using the ECL Western Blotting Detection Kit (GE Healthcare) and revealed either by exposure to film or using the ChemiDoc TM Touch imaging system (Bio-Rad). Equal loading was confirmed either by using alpha-tubulin (Sigma T6074) as described above diluted 1/5000, or by staining the membranes with Coomassie blue. Live imaging. Transgenic animals carrying fluorescent transcriptional or translational fusion reporters were immobilized in 1 mM of levamisole (Sigma) and mounted on a 2% agarose pad attached to a glass slide. Fluorescence was visualized using either a Zeiss Z2 imager microscope equipped with a Zeiss Axiocam 506 mono Camera with Zen2 (version 2.0.0.0) software or a Zeiss 780 confocal microscope. The images shown in all figures are representative of consistent results obtained in multiple independent experiments. Recovery from starvation assays. Gravid egg-laying adult animals were bleached and eggs allowed to hatch in M9 medium with rotation. L1 worms were starved for 72 h (or not, for non-starved animals) in M9 with rotation prior to seeding, and recovery on NGM plates seeded with OP50 E. coli and grown until wild type animals reached L4 (~40 h) or young adults (~48 h) at 20°C. Body areas of either L4 or adult worms were measured after washing the animals onto NGM plates containing no food. Videos of~5 s were recorded using a Nikon SMZ745T stereomicroscope and a TrueChrome IIS camera (Tucsen). Animal body area was quantified using WormLab tracking software (MBF Bioscience). Oil Red O staining. Starved L4 stage worms were collected in 1× PBS + 0.1% Tween-20 and washed a minimum of two times in the same solution. After the final wash worms were left in 400 µL of 1× PBS + 0.1% Tween-20 to which 500 µL of 2x MRWB (160 mM KCl, 40 mM NaCl, 14 mM EGTA, 0.4 mM Spermine, 30 mM PIPES (pH 7.4), 0.2% 2-mercaptoethanol) and 100 µL 20% paraformaldehyde were added. Fixation was performed with rotation for 1 h at room temperature. Fixed worms were subsequently washed twice in 1 mL of 100 mM Tris-HCl (pH 7.4). Worms were resuspended in 100 µL of 100 mM Tris-HCl (pH 7.4) and combined with 900 µL of reduction buffer (100 mM Tris-HCl (pH 7.4), 10 mM DTT). After mixing by inversion for 30 min, worms were washed once with 1× PBS and aspirated to 300 µL. Seven hundred microliter of isopropanol was added and samples were mixed with rotation for 1 h. Worms were pelleted and resuspended in Oil Red O solution (0.5 g Oil Red O in 100 mL isopropanol) diluted 1.5× in water and filtered through a 0.2 µM unit. Samples were incubated with rotation for 2 h at room temperature and then washed twice with 1× PBS + 0.1% Tween-20 prior to imaging. Quantification was performed using Image J fluorescence intensity on at least 10 independent worms.
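The post-starvation body-size comparisons (one-way ANOVA with Tukey's multiple comparisons test, per the Fig. 3k legend) can be outlined in Python as below. The body-area values are simulated stand-ins for WormLab measurements of hypothetical genotypes, and the sketch assumes SciPy ≥ 1.8, where scipy.stats.tukey_hsd is available.

```python
# Minimal sketch: one-way ANOVA followed by Tukey's HSD across genotypes,
# as used for the post-starvation body-size comparisons (Fig. 3k legend).
# Body-area values are simulated; assumes SciPy >= 1.8 for stats.tukey_hsd.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
wild_type = rng.normal(0.110, 0.010, 30)      # arbitrary body-area units, n >= 27 per strain
mutant_a = rng.normal(0.095, 0.010, 30)       # hypothetical mutant strain
mutant_b = rng.normal(0.108, 0.010, 30)       # hypothetical rescue strain

f_stat, p_anova = stats.f_oneway(wild_type, mutant_a, mutant_b)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")

tukey = stats.tukey_hsd(wild_type, mutant_a, mutant_b)
print(tukey)   # pairwise comparisons with family-wise corrected p values
```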
2021-04-15T06:16:27.083Z
2021-04-13T00:00:00.000
{ "year": 2021, "sha1": "e55cd1181db515a6c25a69c21b3b421292b25e90", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-021-22503-7.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "63d56c1445cb3fd8529b90627600e7d16196740c", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
12797291
pes2o/s2orc
v3-fos-license
Combined effects of p-coumaric acid and naringenin against doxorubicin-induced cardiotoxicity in rats

Background: Doxorubicin (DOX) is one of the most active cytotoxic agents, with efficacy in malignancies either alone or combined with other cytocidal agents. The clinical usefulness of this anthracycline drug has, however, been limited by its cardiac toxicity. Many therapeutic interventions have been attempted to improve the therapeutic benefits of the drug. This study examines the possible protective effects of the combination of p-coumaric acid (PC) and naringenin (NR) on DOX-induced cardiac toxicity in male Swiss albino rats. Methods: A total of nine groups of Swiss albino rats were used. Group I (vehicle control) received saline solution daily, and Group II (disease control) received saline solution daily up to the 29th day, with a single dose of DOX (15 mg/kg i.p.) given on the 30th day. PC alone (100 mg/kg/day p.o. or 200 mg/kg/day p.o.) and NR alone (15 mg/kg/day) were administered orally for 30 days. Similarly, the standard drug vitamin E (100 mg/kg/day) was administered alone for 30 days. The PC/DOX group received PC (200 mg/kg/day), and the PC and NR/DOX group received the combination of PC (200 mg/kg/day) and NR (15 mg/kg/day), each prior to the DOX challenge. Results: Doxorubicin induced marked biochemical alterations characteristic of cardiac toxicity, including an increase in MDA levels and decreases in SOD, CAT, and GSH levels, but prior administration of the combination of PC and NR ahead of the doxorubicin challenge ameliorated all these biochemical markers. Conclusion: The study demonstrates the beneficial effects of the combination of PC and NR in protecting animals against DOX-induced cardiotoxicity.

INTRODUCTION

Doxorubicin (DOX) is one of the most active anthracycline antibiotics and has long been used in the therapy of an array of human malignancies, such as hematopoietic and lymphoblastic malignancies [1] and solid tumors, [2] either alone or in combination with other cytocidal agents. [3] The clinical usefulness of DOX, however, has been hampered by its detrimental cardiac toxicity, [4] so cardiac protection during the use of DOX in the treatment of cancer can be achieved by limiting its cumulative dose. Several in vivo and in vitro studies have demonstrated that reactive oxygen metabolites, including free radical species, superoxide anions (O2•−), hydrogen peroxide (H2O2), and hydroxyl radicals (•OH), are important mediators of tissue injury. [5] The involvement of oxygen radical injury to membrane lipids has been reported as the main causative factor for DOX-induced cardiotoxicity. [6-9] DOX forms semiquinone free radicals through one-electron reduction; [6-9] these free radicals donate their electrons and form superoxide anions. [10] The dismutation of superoxide yields hydrogen peroxide (H2O2). [11] Under biological conditions, the semiquinone reductively cleaves hydrogen peroxide to produce hydroxyl radicals, which are the most reactive and destructive species. This leads to lipid peroxidation, causing irreversible damage to membrane structure and function. [12] The metabolic machinery of heart tissue is very active, and its antioxidant resources are very low compared with other organs in the body, making the heart quite vulnerable to free radical damage by DOX, [13] which ultimately leads to cardiotoxicity. In this study, we evaluated the effect of the combination of two drugs, i.e., p-coumaric acid (PC) and naringenin (NR), against the cardiotoxicity caused by DOX use in the treatment of cancer. PC is a phenolic acid widely distributed in plants and forms part of the human diet.
[14] Sources of phenolic acids include peanuts, tea, coffee, wine, chocolate, beer, etc. [15] The antioxidative mechanisms of phenolic acids include binding of metal ions; scavenging of reactive oxygen species (ROS), reactive nitrogen species (RNS), or other precursors; upregulation of endogenous antioxidant enzymes; and the repair of oxidative damage to biomolecules. [16] Recently, interest in food phenolics has increased due to their role as antioxidants and scavengers of free radicals and their implication in the prevention of many pathological diseases, such as cardiovascular disease [16,17] and certain types of cancer. [18] The other drug used in this study, NR, is a flavonoid of the benzo-γ-pyrone class with high pharmacological potency. Sources of NR are grapefruit, orange, and lemon. Due to their free radical scavenging and ion-chelating properties, [13] flavonoids can be considered possible potential protectors against DOX-induced cardiotoxicity. In this study, the cardioprotective effect of the combination of both drugs was evaluated in male Swiss albino rats challenged with a single cumulative dose of DOX. In addition, the effect of this combination on antioxidant parameters such as SOD (superoxide dismutase), CAT (catalase), GSH (glutathione), and MDA (malondialdehyde) was also determined.

Animal selection

Fifty-four Swiss albino rats, weighing 180-200 g, were obtained from the animal breeding facility of Sudhakarrao Naik Institute of Pharmacy, Pusad. Rats were maintained in our facility under standard laboratory conditions (a photoperiod of 12 h artificial light and 12 h darkness, at 20-23°C with 65-67% humidity), and all pharmacological experimental protocols were approved by the Institutional Animal Ethics Committee (Reg no: SNIOP/264/03c/CPCSEA, Feb 2009).

Experimental design

Animals were divided into nine groups with six animals in each group. It was a 30-day study in which Group I (vehicle control) received 1 ml of saline solution daily, and Group II received saline up to the 29th day, with a single dose of DOX (15 mg/kg i.p.) given on the 30th day.

Treatment schedule

The treatment schedule is described in Table 1.

Estimation of superoxide dismutase

Twenty milligrams of heart tissue was homogenised in potassium phosphate buffer (50 mM/l, pH 7.4) and centrifuged at 10,000 rpm for 20 min at 4°C in a cooling centrifuge; the supernatant was used to measure SOD activity. SOD activity was determined by assessing the inhibition of pyrogallol autooxidation. Pyrogallol (24 mM) was prepared in 10 mM HCl and kept at 4°C before use. Aliquots of supernatant were added to Tris-HCl buffer (pH 8.5) containing 25 μl of pyrogallol, mixed thoroughly, and changes in absorbance were recorded at 1-min intervals for 3 min. [19-21]

Estimation of catalase

A heart tissue homogenate was prepared in potassium phosphate buffer (50 mM/l, pH 7.4) at a ratio of 1:10 w/v. The homogenate was centrifuged at 10,000 rpm at 4°C in a cooling centrifuge for 20 min. CAT activity in the tissue homogenate was assayed according to the method of Claiborne (1985). In this method, the decomposition of H2O2 (19 mM/l) due to CAT activity was assayed by the decrease in absorbance of H2O2 at 240 nm. [22,23]

Estimation of glutathione

GSH accounts for the majority of soluble reduced sulfhydryl in the cell. [23,24] GSH in cardiac tissue was determined by measuring the total soluble sulfhydryl content. GSH was determined by utilising 5,5-dithio-bis(2-nitrobenzoic acid) (Ellman's reagent).
The homogenised heart tissue in 0.02 M EDTA was mixed with water and 50% trichloroacetic acid, and the tube was shaken for 10-15 min and centrifuged at 3000 rpm for 15 min. In the supernatant, the sulfhydryl concentration was determined photometrically following the method of Ellman. [23,24]

Estimation of lipid peroxidation malondialdehyde (TBARS)

The measurement of heart lipid peroxides by a colorimetric reaction with thiobarbituric acid was done as described by Ohkawa et al. In this assay, 10% tissue homogenate, 30% trichloroacetic acid, and 0.8% thiobarbituric acid were combined; the tube was covered with aluminium foil and kept in a shaking water bath for 30 min at 80°C, then placed in ice-cold water at 30°C and centrifuged at 3000 rpm for 15 min. [25,26] Absorbance of the supernatant was noted at room temperature against an appropriate blank (the blank consisted of 1 ml distilled water, 0.5 ml of 30% trichloroacetic acid, and 0.5 ml of 0.8% thiobarbituric acid).

Statistical analysis

All experimental results are given as the mean ± SEM. Comparisons between experimental and control groups were performed by ANOVA, followed by Dunnett's test for post hoc comparison, when appropriate. A value of P < 0.05 was considered significant, while P > 0.05 was considered non-significant.

Effect of p-coumaric acid and naringenin on doxorubicin-induced tissue superoxide dismutase activity

The SOD activity showed a significant (P < 0.01) decrease in Group II when compared with Group I. Groups IV and VIII showed a less significant (P < 0.05) increase, whereas Groups VI and IX showed a significant (P < 0.01) increase in SOD activity when compared with Group I. However, when compared with Group II, Groups III, IV, and VII showed a less significant (P < 0.05) increase, and Groups VI, VIII, and IX showed a significant (P < 0.01) increase in SOD activity.

Effect of p-coumaric acid and naringenin on doxorubicin-induced tissue catalase activity

Group II showed a significant (P < 0.01) decrease in CAT activity when compared with Group I. Groups III, VII, VIII, and IX showed a less significant (P < 0.05) increase in CAT activity, whereas Groups IV and VI showed a significant (P < 0.01) increase in CAT activity when compared with Group I. When compared with Group II, Groups III and VIII showed a less significant (P < 0.05) increase, whereas Groups IV, VI, VII, and IX showed a significant (P < 0.01) increase in CAT activity.

Effect of p-coumaric acid and naringenin on doxorubicin-induced tissue glutathione activity

The blood GSH levels of the disease control, i.e., the DOX-treated group (Group II), showed a significant (P < 0.01) decrease when compared with the normal control (Group I). Groups IV and VI exhibited a less significant (P < 0.05) increase in the GSH level when compared to Group I. When compared with Group II, the increases in GSH levels in Groups IV, VIII, and IX were less significant (P < 0.05), and in Group VI GSH was restored significantly.

Effect of p-coumaric acid and naringenin on doxorubicin-induced tissue thiobarbituric acid reactive substance levels

The TBARS concentration of Group II showed a significant (P < 0.01) increase as compared to Group I. Group VI showed a significant (P < 0.05) decrease in TBARS concentration compared to Groups I and II. Groups IV, VIII, and IX showed a less significant (P < 0.05) decrease in TBARS concentration, while Group VI exhibited a significant (P < 0.01) decrease when compared with Group II.

N = number of animals used in each group.
Treatment duration = 30 days; 24 h following the last administration, the animals were killed by cervical dislocation, and the heart of each animal was rapidly dissected and washed with ice-cold 0.15 mM KCl to remove excess blood, then blotted between two filter papers and transferred into a preweighed vial to determine the wet weight. Homogenisation of each heart was carried out on ice using a tissue homogeniser. Different aliquots of heart homogenate were prepared and kept on ice for carrying out the assays.

DISCUSSION

DOX is a broad-spectrum antibiotic used as a chemotherapeutic drug for the treatment of different forms of human neoplastic disease. [27] However, the clinical use of this anticancer drug is greatly limited by its dose-dependent cardiotoxicity. [28] Free radical generation and lipid peroxidation have been suggested to be responsible for DOX-induced cardiac toxicity. [29,30] These oxygen-derived radicals cause severe damage to the plasma membrane and interfere with cytoskeleton assembly. [31] Free radicals, ROS, and RNS are generated in the body by various endogenous systems, exposure to different physiochemical conditions, or pathological states. A balance between free radicals and antioxidants is necessary for proper physiological function. If free radicals overwhelm the body's ability to regulate them, a condition known as oxidative stress ensues. Free radicals thus adversely alter lipids, proteins, and DNA and trigger a number of human diseases. Hence, application of an external source of antioxidants can assist in coping with this oxidative stress. [32] Among the therapeutic modalities adopted to attenuate DOX cardiomyopathy, the most promising results have come from combining the drug with a myriad of antioxidants in an attempt to abate oxidative damage in heart tissue and hence to abrogate the cardiac injury. The present work was designed to investigate the potential cardioprotective effect of the combination of PC and NR against DOX-induced cardiotoxicity. DOX-induced cardiotoxicity involves one-electron reduction of DOX, leading to the formation of the corresponding semiquinone free radicals in cardiac myocytes by myocardial CYP-450 and flavin monooxygenase. In the presence of oxygen, these free radicals rapidly donate their electron to oxygen or react with molecular oxygen and initiate a cascade of reactions producing ROS. Free radical generation and lipid peroxidation have been suggested to be responsible for DOX-induced cardiac toxicity. [29,30] Moreover, heart tissue is especially susceptible to free radical injury because of its low levels of free radical-detoxifying enzymes such as SOD and CAT and of GSH, and its limited oxygen reserve. Further, DOX also has a high affinity for the phospholipid component of the mitochondrial membrane in cardiac myocytes, leading to accumulation of DOX in heart tissue. The cellular GSH level is closely related to lipid peroxidation and the disturbances of Ca2+ influx induced by toxic agents. DOX administration induced oxidative stress in cardiac tissue, as manifested by the alterations observed in the cardiac antioxidant defence system, both enzymatic and non-enzymatic. The anthracycline drug significantly increased cardiac lipid peroxidation, as manifested by the increased MDA level. The modulation of antioxidant enzyme activities following DOX administration has been discussed in many studies. [6-9] The association between the elevated cardiac content of MDA and the lowered cardiac content of GSH found in this study strongly supports the oxidative damage caused by DOX.
This observation is supported by the findings of Lazzarino et al. and Gustafson et al. (1986), who reported that the cardiac content of MDA was increased and the GSH content was decreased by administration of DOX to rodents. It is well documented that long-term treatment with DOX causes irreversible, severe, and potentially life-threatening cardiac damage. [33] The mechanisms involved in such toxicity have been documented by many investigators. The involvement of oxygen free radicals and oxidative stress has been strongly accepted as a crucial factor in the pathogenesis of DOX-induced cardiac damage. p-Coumaric acid is a phenolic compound widely distributed in plants and forms a part of the human diet. [34] The mechanisms of PC include binding of metal ions; scavenging of ROS, RNS, or their precursors; upregulation of endogenous antioxidant enzymes; and the repair of oxidative damage to biomolecules. [16] Abdel-Wahab et al. reported that PC shows potential cardioprotective effects against DOX-induced oxidative stress in the rat heart. Naringenin belongs to the class of flavonoids, which have a multitude of pharmacological effects. Due to its free radical scavenging activity and iron-chelating properties, this drug is considered a possible potential protector against DOX-induced cardiac toxicity. NR is the aglycone of its natural glycoside and is abundantly present in grapefruit. Its other actions include antithrombotic, [35] anti-inflammatory, [36] antiestrogenic, [37] as well as chemopreventive actions. [38] Hossam et al. (2005) reported that NR can be used as a cardioprotective agent against DOX-induced cardiotoxicity. The combined effect of PC and NR may be due to their synergistic action. This combination may act as a hydrogen-donating radical scavenger, scavenging lipid alkoxyl and peroxyl radicals, and protect the myocardium from DOX-induced injury. With reference to Table 1, both PC (200 mg/kg) and NR (15 mg/kg) were administered in combination for 30 days, followed by a single dose of DOX (15 mg/kg) on the 30th day, and the cardioprotective action of the combination was then determined by measuring biochemical parameters such as superoxide dismutase (SOD), catalase (CAT), glutathione (GSH), and malondialdehyde (MDA), as revealed in Figure 1. Pre-treatment of animals with PC and NR modulated the oxidative damage induced by DOX administration.

CONCLUSION

In this study, Swiss albino rats were pretreated with the combination of drugs, i.e., PC and NR, ahead of a single dose of DOX, and the effect of this combination on biochemical parameters such as SOD, CAT, GSH, and MDA was then determined. The results show that prior administration of the drugs ameliorated all biochemical parameters. The study demonstrates the beneficial effects of the combination of the natural drugs PC and NR in protecting the animals against DOX-induced cardiac oxidative damage. The protective effect of PC and NR is due to their free radical scavenging and iron-chelating properties and their action as hydrogen-donating scavengers of lipid alkoxyl and peroxyl radicals. On the basis of our findings, it may be worthwhile to suggest the concomitant administration of a combined dose of PC and NR prior to DOX use in cancer chemotherapy.

PC = p-coumaric acid, Vit = Vitamin, NR = Naringenin. *P < 0.05 is less significant as compared to Group I; **P < 0.01 is significant as compared to Group I; +P < 0.05 is less significant as compared to Group II; ++P < 0.01 is significant as compared to Group II.
Comparisons between experimental and control groups were performed by ANOVA, followed by Dunnett's test for post hoc comparison, when appropriate.
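The group comparisons described above (one-way ANOVA with Dunnett's post hoc test against a control group) can be outlined in Python as follows. The values are simulated stand-ins for the measured biochemical parameters, and the sketch assumes SciPy ≥ 1.11, where scipy.stats.dunnett is available.

```python
# Minimal sketch: one-way ANOVA followed by Dunnett's test comparing each
# treatment group with the DOX (disease control) group. MDA-like values are
# simulated; assumes SciPy >= 1.11 for stats.dunnett.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
dox_control = rng.normal(6.0, 0.5, 6)        # Group II (DOX only), n = 6 per group
pc_dox = rng.normal(4.5, 0.5, 6)             # PC pre-treatment + DOX (hypothetical values)
pc_nr_dox = rng.normal(3.8, 0.5, 6)          # PC + NR pre-treatment + DOX (hypothetical values)

f_stat, p_anova = stats.f_oneway(dox_control, pc_dox, pc_nr_dox)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

dunnett = stats.dunnett(pc_dox, pc_nr_dox, control=dox_control)
for name, p in zip(["PC/DOX", "PC+NR/DOX"], dunnett.pvalue):
    print(f"{name} vs DOX control: p = {p:.4f}")
```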
2018-04-03T03:00:11.230Z
2011-07-01T00:00:00.000
{ "year": 2011, "sha1": "ab39c206de7550269533b51a6fd96396ebc25b63", "oa_license": "CCBYNCSA", "oa_url": "https://www.phcogres.com/sites/default/files/PharmacognRes-3-3-214.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "ca003328a34027784d76776fcf8a92192f600369", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
260757951
pes2o/s2orc
v3-fos-license
VITAMIN D AS A MARKER OF NON-ALCOHOLIC FATTY LIVER DISEASE IN PATIENTS WITH TYPE 2 DIABETES MELLITUS

Ninety patients with type 2 diabetes mellitus (DM) aged 39 to 76 years were examined. Women were 71 (78.9 %), men – 19 (21.1 %). The diagnosis of non-alcoholic fatty liver disease (NAFLD) was made based on an ultrasound examination of the abdominal cavity. The levels of vitamin D in the blood were determined by enzyme immunoassay. NAFLD was detected in 66.7 % of patients with type 2 DM. Vitamin D levels in the blood of patients with NAFLD on the background of type 2 DM were lower, and its deficiency was more common, than in cases of DM without NAFLD. Serum vitamin D levels in patients with NAFLD and cholestatic syndrome were lower than in cases of NAFLD without signs of cholestasis. According to regression analysis, the development of NAFLD in type 2 DM is affected by reduced vitamin D levels in the blood and increased values of glucose, alanine aminotransferase, and low-density lipoproteins. Thus, the development of NAFLD against the background of type 2 DM is associated with a decrease in vitamin D concentration in the blood. A serum level of vitamin D less than 16.18 ng/ml can be used as a predictor of NAFLD in patients with type 2 DM.

Vitamin D is essential in maintaining calcium homeostasis in the body through a receptor belonging to the nuclear hormone receptor superfamily. In recent decades, particular attention has been paid to the nonclassical effects of vitamin D, such as immune modulation, effects on hormonal secretion, and participation in cell differentiation and proliferation, which provide antioxidant protection and reduction of inflammation and fibrosis [5]. It is hypothesized that vitamin D deficiency coexists with NAFLD, given their associations with obesity and type 2 DM. The point of view that there is a causal relationship between hypovitaminosis D and metabolic pathology of the liver is gaining more recognition [6]. Thus, in patients with NAFLD, a reduced content of vitamin D in the blood was determined, and its deficiency (less than 20 ng/ml) occurred in 70 % of the cases [7]. Decreased blood levels of vitamin D (less than 30 ng/mL) were detected in 70.1 % of patients with steatosis, 89.7 % with non-alcoholic steatohepatitis, and 84.6 % with liver cirrhosis [8]. A decrease in serum vitamin D values was noticed in NAFLD patients with type 2 DM [9-11], especially in cases of severe liver fibrosis [11]. However, there is an opinion that the concentration of vitamin D in the blood, as well as the frequency of its deficiency in patients with NAFLD, does not differ from the general population [12,13], although the results of some meta-analyses refute these data [14]. The role of vitamin D in predicting NAFLD in patients with type 2 DM has not yet been determined. The study aimed to evaluate the predictive value of blood vitamin D levels in patients with NAFLD and type 2 DM.
Material and Methods. The study included 90 patients with type 2 DM aged 39 to 76 years (mean age 58.79±8.59 years). Women were 71 (78.9 %), men – 19 (21.1 %). Inclusion criteria: type 2 DM; age over 18; signed informed consent to participate in the study. Exclusion criteria: type 1 DM; gestational diabetes; liver diseases of other etiologies; alcohol consumption in hepatotoxic doses; acute infections in the last three months; chronic somatic disorders in the aggravation or decompensation stage; organ pathology affecting phosphorus-calcium metabolism; drug addiction; pregnancy and lactation; malignant neoplasms; consumption of calcium, vitamin D, glucocorticosteroids, bisphosphonates, or other drugs that affect the metabolism of vitamin D over the past three months. The duration of DM did not exceed 9.5 (5.0; 15.0) years. The mean BMI was 32.78±4.46 kg/m2; the values of glycosylated haemoglobin, glucose, ALT, AST, GGT, total bilirubin, total cholesterol, HDL, LDL, and triglycerides were also recorded. The diagnosis of NAFLD was based on abdominal ultrasonography, which is highly sensitive and specific and is a sufficient diagnostic criterion when excluding other causes of liver pathology [15]. The content of vitamin D in blood serum was determined by ELISA (SC Vital Development Corporation, Russia). The patients signed an informed consent to participate in the research, confirmed by the university's ethics committee. The results were statistically processed using StatTech v. 3.0.9 (Stattech Pvt. Ltd., Russia). Quantitative parameters with a normal distribution were described using the mean (M) and standard deviation (SD). Without a normal distribution, parameters are presented as a median (Me) and the lower and upper quartiles (Q1; Q3). To identify differences, Student's t-test and the Mann–Whitney test were used. The odds ratio (OR) and its 95 % confidence interval (CI) were calculated. ROC analysis was performed to determine the diagnostic value of the indicators, and the information content of the test was assessed using the area under the ROC curve. The value of several parameters in the prediction of NAFLD was estimated by logistic regression. Sensitivity, specificity, positive and negative predictive values, and accuracy were calculated. Differences were considered statistically significant at p<0.05.

Results and Discussion. NAFLD was detected in 60 (66.7 %) patients with type 2 DM; in 30 (33.3 %) cases, there were no signs of liver pathology. Patients with NAFLD on the background of type 2 DM had higher blood levels of glucose, glycosylated haemoglobin, ALT, AST, GGT, total cholesterol, and LDL, and very low levels of HDL (Table 1). Triglycerides in this cohort of patients tended to increase. The blood concentration of vitamin D in patients with type 2 DM was decreased and amounted to 15.4 (10.94; 29.98) ng/ml. Serum levels of vitamin D in type 2 DM patients with NAFLD were statistically significantly lower (12.00 (8.53; 16.20) ng/ml) than in diabetic patients without NAFLD (32.33 (25.28; 41.05) ng/ml; p<0.001). Vitamin D deficiency was more common in patients with NAFLD than in diabetic patients without NAFLD (81.7 % and 16.7 % of cases, respectively), and normal values of vitamin D were rarer (6.6 % and 56.6 % of cases, respectively). Vitamin D insufficiency in the two groups was determined in 11.7 % and 26.7 % of cases, respectively (Fig. 1).
Blood levels of vitamin D in patients with NAFLD in combination with cytolysis were lower (11.63 (6.86; 15.03) ng/ml) than in patients without increased aminotransferase activity (13.07 (9.77; 22.64) ng/ml), but the differences did not reach statistical significance (p>0.05). Threshold levels of vitamin D in the blood of less than 16.18 ng/ml were associated with an increased risk of NAFLD in patients with type 2 DM (OR = 87.0; 95 % CI (10.89; 694.55)) and were highly sensitive and specific (Table 2). The area under the ROC curve was 0.895±0.041 (p<0.001). Using multiple logistic regression, the significance of 16 parameters was assessed (age, gender, duration of DM, presence of arterial hypertension, body mass index, activity of AST, ALT, GGT, total bilirubin, glucose, glycosylated haemoglobin, total cholesterol, triglycerides, HDL, LDL, and vitamin D in the blood) in the prediction of NAFLD in patients with type 2 DM. According to the regression analysis, the development of NAFLD in this population of patients is prognostically significantly affected by serum levels of vitamin D, glucose, ALT, and LDL. The chances of having NAFLD decrease by 1.094 times with an increase in vitamin D by one ng/ml and, on the contrary, increase with an elevation in glucose by one mmol/l (1.377 times), ALT by one U/l (1.067 times), and LDL by 1 mmol/l (2.208 times) (Fig. 2). The study showed that in NAFLD patients with type 2 DM, there is a decrease in the blood content of vitamin D and an increase in the frequency of its deficiency (81.7 %), which coincides with previously obtained data [9-11]. Vitamin D deficiency is thought by some to be unrelated to NAFLD but rather associated with the increased vitamin D content of fatty tissue in obesity, as well as a sedentary lifestyle, less contact with sunlight, and high-calorie food with a low content of minerals and vitamins [8,16]. However, obesity does not explain vitamin D deficiency in NAFLD, as some studies have shown that vitamin D levels in the blood are also reduced in non-obese patients [16]. The high risk of NAFLD associated with hypovitaminosis D persisted even after excluding such a factor as visceral obesity [13]. Several ways exist to understand the pathogenetic relationship between vitamin D deficiency and NAFLD. First, vitamin D reduces insulin resistance of peripheral tissues and hepatocytes, so its deficiency leads to hepatic steatosis. In a NAFLD model, adding vitamin D reduced glucose and insulin levels, and triglycerides in the liver. The protective effect was associated with the activation of the vitamin D receptor, which increased the expression of hepatocyte nuclear factor 4α (which controls the expression of triglyceride transport genes) [17]. Secondly, vitamin D influences hormone secretion (increases adiponectin production, inhibits resistin and renin activity), the immune response (reduces the release of pro-inflammatory mediators), and cell proliferation. In a model of NAFLD, vitamin D deficiency led to an increase in hepatic expression of the resistin gene and of genes of acquired (interleukins 1, 4, 6) and innate immunity (TLR-2, -4, -9), which was accompanied by an increase of fat in the liver, parameters of lobular inflammation, and the NAFLD activity score [18].
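The odds ratio, its confidence interval, and the conversion between per-unit odds ratios and regression coefficients reported above follow standard formulas. The Python sketch below illustrates them; the 2×2 counts are a reconstruction chosen to be consistent with the reported OR, CI, sensitivity, and specificity (the actual Table 2 counts are not reproduced here), and the coefficient conversion is generic.

```python
# Minimal sketch of the standard calculations behind the reported statistics:
# odds ratio with a 95% CI from a 2x2 table, sensitivity/specificity, and
# conversion of a per-unit odds ratio to a logistic regression coefficient.
# Counts are reconstructed for illustration, not taken from the paper's tables.
import math

# Hypothetical 2x2 table: rows = vitamin D below threshold (yes/no),
# columns = NAFLD (yes/no).
a, b = 45, 1     # vitamin D < 16.18 ng/ml: NAFLD yes / no
c, d = 15, 29    # vitamin D >= 16.18 ng/ml: NAFLD yes / no

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
sensitivity = a / (a + c)
specificity = d / (b + d)
print(f"OR = {odds_ratio:.1f}, 95% CI ({ci_low:.2f}; {ci_high:.2f})")
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")

# A "chances decrease by 1.094 times per ng/ml" statement corresponds to a
# logistic regression coefficient of ln(1/1.094) per ng/ml of vitamin D.
beta_vitd = math.log(1 / 1.094)
print(f"implied coefficient per ng/ml: {beta_vitd:.4f}")
```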
In addition, activation of the vitamin D receptor suppresses oxidative stress pathways and the proliferation of hepatic stellate cells, which reduces fibrosis in NAFLD. For example, in vitamin D-deficient rats, hepatic expression of the heme oxygenase-1 gene, a marker of oxidative stress involved in fibrogenesis, was increased, along with increased fibrosis severity, in the NAFLD model [18]. Finally, an important role of the gut microbiota in the pathogenesis of NAFLD may be associated with vitamin D deficiency. The microbiome's interaction with intestinal epithelial cells is mediated by the TLRs expressed on them. Vitamin D deficiency is responsible for increased expression of TLR-2, -4, and -9, followed by an increase in endotoxin exposure to the liver, which contributes to the development of NAFLD [5]. According to our data, blood levels of vitamin D less than 16.18 ng/ml were associated with an increased (87-fold) risk of NAFLD in patients with type 2 DM and were characterized by high sensitivity and specificity (75.0 and 96.7 %, respectively). Previously, it was noted that NAFLD is negatively associated with vitamin D levels, and the risk of NAFLD occurrence increased by 20-30 % in cases of vitamin D deficiency [13,19]. Blood levels of vitamin D less than 11 ng/ml predicted the presence of NAFLD in the general population with a sensitivity of 45 % and a specificity of 98 % [7]. Our regression analysis showed that the development of NAFLD in patients with type 2 DM is affected by serum levels of vitamin D, glucose, ALT, and LDL. In other studies, according to binary logistic regression, NAFLD risk factors in type 2 DM were age, HOMA-IR values, BMI, GGT, cystatin C, HDL, and vitamin D in the blood [7,10,11]. Thus, the formation of NAFLD against the background of type 2 DM is associated with a reduced vitamin D content in the blood. The relationship of an increased risk of developing NAFLD with reduced vitamin D levels indicates its pathogenetic significance in the onset of the disease.

Conclusions

1. In patients with NAFLD on the background of type 2 DM, there is a decrease in the levels of vitamin D in the blood, especially in cases of cholestatic syndrome.
2. The risk of developing NAFLD in patients with type 2 DM is increased 87 times with blood levels of vitamin D less than 16.18 ng/ml.
3. Predictors of the formation of NAFLD in patients with type 2 DM are parameters of vitamin D, ALT, glucose, and LDL.

Disclosure: The authors declare no conflict of interest.

Fig. 2. Factors influencing the development of NAFLD in type 2 DM: odds ratio with 95 % CI.

NAFLD is one of the most common chronic liver diseases and includes a wide range of conditions such as steatosis, non-alcoholic steatohepatitis, fibrosis, cirrhosis, and hepatocellular carcinoma, which are based on the accumulation of triglycerides in hepatocytes without alcohol abuse [1, 2].
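The ROC-based cutoff selection described in the Results (AUC 0.895, threshold 16.18 ng/ml) follows a standard procedure: compute the ROC curve for vitamin D as a negatively oriented predictor of NAFLD and pick the point maximizing Youden's index. The sketch below does this with scikit-learn on simulated vitamin D values; it is illustrative only, assumes scikit-learn is available, and none of the numbers are the study's data.

```python
# Minimal sketch: ROC analysis for vitamin D as a predictor of NAFLD and
# selection of an optimal cutoff via Youden's index. Data are simulated;
# lower vitamin D corresponds to higher NAFLD risk, so the score is negated.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(4)
vit_d_nafld = rng.normal(12, 4, 60)       # simulated ng/ml, NAFLD group (n = 60)
vit_d_no_nafld = rng.normal(32, 8, 30)    # simulated ng/ml, no NAFLD (n = 30)

y = np.concatenate([np.ones(60), np.zeros(30)])          # 1 = NAFLD
score = -np.concatenate([vit_d_nafld, vit_d_no_nafld])   # higher score = lower vitamin D

auc = roc_auc_score(y, score)
fpr, tpr, thresholds = roc_curve(y, score)
youden = tpr - fpr
best = np.argmax(youden)
cutoff_ng_ml = -thresholds[best]
print(f"AUC = {auc:.3f}; optimal cutoff ~ {cutoff_ng_ml:.2f} ng/ml "
      f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")
```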
COVID-19 Concern and Stress in Bangladesh: Perceived Social Support as a Predictor or Protector The insidious coronavirus disease-2019 (COVID-19) has been a global public health concern affecting almost everyone physically and/or psychologically. The psychological consequences like concern about COVID-19 and increased perceived stress are primarily results of preventive measures like social distancing, lockdown, etc. The present study examined whether perceived social support predicts stress or lessens the effect between concern and stress during social distancing. More specifically, we tested whether (a) the greater social support is associated with lesser perceived stress, and (b) the greater an individual perceives social support, the weaker will be the concern-to-stress relationship (a prediction from buffering hypothesis). We utilized the data from the Bangladeshi respondents (n = 204, 54% males) as part of the COVIDiSTRESS global survey. The three-step hierarchical regression analysis revealed social support as a predictor of stress along with coronavirus concerns rather than protector. The findings have implications for professionals (in providing psychological support to vulnerable people), policymakers (in implementing steps in the future that would less impact on perceived social support), and future researchers (in solving the ultimate role of social support to the association between fear and stress). Introduction The COVID-19 (WHO, 2020) has set the ground for a socio-psychological and economic crisis by putting most parts of the world in lockdown. The infected people suffer from fear of death; the quarantined people suffer from fear of infection, isolation, loneliness, anger, depression, anxiety, and stress (Khalaf, 2020). The overall conditions serve as stressors due to the fear of contracting the disease, heightened anxiety and uncertainty about the future, lack of supplies, and financial losses (Bao et al., 2020;Brooks et al., 2020;Garfin et al., 2020;Keeter, 2020;Wang et al., 2020). These stressors may increase the risk of clinical effects and foster feelings of isolation, loneliness, frustration, anger, anxiety, confusion, or boredom (Liu et al., 2020;Wang et al., 2020). It is a general prediction that any contagious epidemic manifestation has a deleterious effect on individuals and society (Duan & Zhu, 2020). The rise of the COVID-19 and its outcomes has led to fears, worries, concerns, and anxiety among individuals worldwide (Ahorsu et al., 2020). During the COVID-19 outbreak in Bangladesh, several factors such as population density, poverty and limited resources, social structure, cultural norms, and environmental factors have exacerbated a complex fear, socioeconomic crisis, and mental stress among people (Shammi et al., 2020). The country also has been facing other epidemics (e.g., panic buying, stigma, fear, and hatred) in the lockdown of the COVID-19 pandemic (Shammi et al., 2020). Psychologists are always looking for interventions to reduce stress, depression, and anxiety as these are the most prevalent and global psychological problems among people (Bilgel & Bayram, 2014;Bukhari & Khanam, 2015;Kessler & Bromet, 2013). In a more global understanding, the term stress may result as a cumulative response to events or life situations experienced as threatening and otherwise demanding (Cohen et al., 1983;Robinson, 2018a, b). It is anything that places strong demands on individuals that creates an imbalanced state in individuals' mindsets. 
It can be defined as "a pattern of cognitive appraisals, physiological responses, and behavioral tendencies that occurs in response to a perceived imbalance between situational demands and the resources needed to cope with them" (Passer & Smith, 2009). Individuals may start to experience stress if a given event is assessed as incriminating or exceeding their resources and endangering their well-being (Lazarus & Folkman, 1984). Thus, they may remain in a state of global stress for longer periods, not necessarily dependent on the objective quality of one event but rather on combinations of stressors, response behaviors, personal and contextual factors. General states of emotional and cognitive depletion may vary among individuals experiencing the same global situation and thus influence the state of global stress (Cobb, 1976;Cohen et al., 1983;Lazarus & Cohen, 1977;Palmwood & McBride, 2019;Steigen & Bergh, 2019). Based on the survey data from 41 countries, the perceived stress scores were found to be significantly higher among students, youths, women, and among those who expressed coronavirus concern and those who perceived increased susceptibility to the COVID-19 (Gamonal-Limcaoco et al., 2020). Social support is an important variable of the present study can be defined as "access to people to whom you can turn in a time of need" (Rohall et al., 2014, p. 230). Stress theorists Cohen & McKay, (1984) proposed that social support acts as a stress buffer, promotes health, and well-being by facilitating psychological resources under highly stressful circumstances. This stress buffer function of social support was supported by the findings of Dour et al., (2014), in which social support mediates symptoms of anxiety and depression in patients. Perceived support is typically explained as resulting from objectively supportive actions that buffer stress (Lakey & Orehek, 2011). A new approach to explain a link between perceived support and mental health, a relational regulation theory (RRT) of Lakey and Orehek, (2011), hypothesizes that the actual main effects occur when people regulate their effect, thoughts, and actions through ordinary yet effectively consequential conversations and shared activities, rather than through conversations about how to cope with stress. There are a number of studies on the relationship between perceived social support and psychological problems such as stress, anxiety, and depression (e.g., Awang et al., 2014;Bukhari & Afzal, 2017;Safree et al., 2010;Wang et al., 2014). In these researches, perceived social support was negatively associated with depression, anxiety, and stress. There was a strong negative relationship between perceived social support and psychological problem (e.g., Backs-Dermott et al., 2010;Pedersen et al., 2009). In some studies (e.g., Liu et al., 2020), anxiety and depression were negatively correlated with perceived social support, and mental health was positively correlated with perceived social support. Some common reactions to COVID-19 are concern about protecting oneself, concern that regular medical care or community services may be disrupted, fear of being socially isolated, guilt, and increased levels of distress due to some social stigma (Center for Community Practice, 2020). Social support can help people to reduce these concerns (e.g., stress, depression, anxiety, and isolation), as well as promote self-esteem and well-being, while a lack of social support has the opposite effect (Albrecht & Goldsmith, 2003). 
Social support has not only a direct impact on our health and well-being through the benefits of social relationships, but it also acts as a buffer against stressful circumstances and promotes coping mechanisms and quality of life (Gariepy et al., 2016). The positive perception of social support directly affects mental health, regardless of stress (Berkman & Glass, 2000). Though social support is inaccessible in some serious life events and crises, some forms of support are particularly important and extremely valuable (Hauken et al., 2015). As a protecting factor, social support has been shown to mitigate the negative impact of stress on individuals' physical and psychological health (Ni et al., 2015;Thoits, 2011) to increase understanding of different domains of resilience (Cohen, 1988). Social support exerted a full mediation effect on the relationship between life stress and anger (Jun et al., 2018). Many studies on perceived social support show that people who perceive adequate social support find fewer psychological consequences than those who perceive little or no support at all (e.g., Dunkley et al., 2000;Nezlek & Allen, 2006). Assessment on all domains of perceived social support (e.g., significant others, family, friends) indicates an association between social support and stress (Alnazly et al., 2021). Social support is not only associated with lower rates of stress in the present COVID-19 pandemic (e.g., Cao et al., 2020) but was also associated with lower rates of mental health problems before the COVID-19 pandemic (e.g., Chew et al., 2020). Social support was a significant moderating factor in several psychological studies conducted on the consequences of COVID-19 (e.g., Li et al. 2021;Liu et al., 2021). Moreover, it was a buffering as well as a protecting factor in the connection between COVID-19 concern and stress (Szkody et al., 2021). Thus, social support is regarded as the moderator in the relation between stressors and psychological outcomes (Romero et al., 2015) that can help to reduce the negative effects of stress on psychological adjustment in any psychological crisis situations including COVID-19 pandemics (e.g., Lee et al., 2014;Li et al., 2020;Ruthig et al., 2009;Schwarzer & Knoll, 2007). Aim and Hypothesis of the Study There are three aims in the present study. The first aim was to determine the levels of coronavirus concern, stress, and social support adopted by the Bangladeshi people in the COVID-19 pandemic situation. The second aim was to identify the relationships between stress, social support, and coronavirus concern. The third aim was to verify the moderating effect of social support on the relationship between coronavirus concern and life stress. Two hypotheses were formed to fulfill the third aim of the present study. First, it was hypothesized that perceived social support would be a predictor to stress. Second, perceived social support would have a moderating effect on the relationship between coronavirus concerns and stress. More specifically, it was predicted that people with higher levels of perceived social support would be less concerned with a corona in response to a stressor than people with lower levels of perceived social support. Participants In the present study, we utilized data from the COVIDiSTRESS global survey (Yamada et al., 2021)-an international collaborative initiative that gathers open data on people's psychological and behavioral responses during the COVID-19 pandemic from multiple countries. 
Although the data collection was completed on May 31, 2020, in our study, we used data from the first data extraction that includes responses collected between March 29 and April 19, 2020. There was a total of 412 Bangladeshi people participated in this survey. We excluded missing responses in the study variables and the sample size of this study was 204. A priori power calculation was utilized to assess the minimum sample size of the present study. With a statistical power of 0.80 to detect the small-sized correlation coefficient, a minimum of 194 respondents is required (https:// www. sample-size. net/ corre lation-sample-size/). Among respondents, 93 (45.6%) were female and 111 (54.4%) were male. The mean age was 28.17 (SD = 6.403) and ranged from 18 to 54 years. About the educational level, 1.5% held a PhD degree, 8.8% had a bachelor or master degree, 18.1% had some college, continuing education or equivalent, 22.5% had up to 12 years of schooling, 23.5% up to 9 years, 13.7% had less than 6 years of schooling, 10.8% none, and 1.5% missing. In terms of employment status, 48.5% were in fulltime employment, 5.4% were in part-time employment, 35.8% were students, 2% were self-employed, 7.8% were unemployed, and 0.5% were missing. In terms of marital status, 44.1% were married/cohabiting, 53.9% single, 0.5% divorced/widowed, and 1.5% others. Concern about COVID-19 Participants' concern about the consequences of COVID-19 was assessed by a Bangla translated questionnaire (Ahmed, 2020) originally developed by the COVI-DiSTRESS global survey (Yamada et al., 2021) team. It measures an individual's concern by asking questions like "To what degree are you concerned about the consequences of the COVID-19, "for yourself," "for your family," "for your close friends," "for your own country," and "for other countries." The responses were recorded on a 6-point Likert-type scale (1 = strongly disagree, 2 = disagree, 3 = slightly disagree, 4 = slightly agree, 5 = agree, and 6 = strongly agree). The possible range of score is between 5 and 30, where higher score is indicative of greater concern and vice versa. The Cronbach alpha for the present study was 0.84. Confirmatory factor analysis from the present study data suggested acceptable model fits of the COVID-19 concerns scale (χ 2 = 18.32, df = 4, p = 0.001, CFI = 0.97, TLI = 0.92, RMSEA = 0.13, SRMR = 0.04). Social Provision Scale (SPS) Participants' perceived social support was assessed through the 10-item Bangla version SPS (Ahmed, 2020) validated by Caron, (2013) based on the original 24-item SPS of Cutrona & Russell, (1987). The SPS-10 assesses five forms of social provisions: attachment (items 1 and 10), guidance (items 2 and 7), social integration (items 3 and 8), reliable alliance (items 4 and 6), and reassurance of worth (items 5 and 9). Each item is rated on a 4-point Likert-type scale (1 = strongly disagree, 2 = disagree, 3 = slightly disagree, 4 = slightly agree, 5 = agree, and 6 = strongly). A continuous scale score is computed by summing responses to the 10 questions, with values ranging from 10 to 60. The SPS-10 summary score is not computed for respondents with data missing on any items. Higher scores can be interpreted as having higher levels of social support. The coefficient alpha for the portion of the study was 0.89. Confirmatory factor analysis from the present study data suggested acceptable model fits of the social provision scale (χ 2 = 86.28, df = 23, p < 0.001, CFI = 0.94, TLI = 0.88, RMSEA = 0.12, SRMR = 0.06). 
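The a priori sample-size figure quoted above can be reproduced with the usual Fisher z approximation for detecting a correlation. The sketch below assumes a target correlation of r = 0.20 (one common reading of a "small-sized" coefficient), a two-tailed α of 0.05 and power of 0.80, which gives the reported minimum of about 194 respondents; the function name is ours, not part of the cited online calculator.

```python
# Minimal sketch: minimum sample size to detect a correlation r at given alpha/power,
# using the Fisher z transformation (the approach behind common online calculators).
from math import ceil, log
from statistics import NormalDist

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-tailed critical value
    z_beta = NormalDist().inv_cdf(power)
    c = 0.5 * log((1 + r) / (1 - r))                # Fisher z of the target correlation
    return ceil(((z_alpha + z_beta) / c) ** 2 + 3)

# Assumed "small" effect size of r = 0.20 reproduces the ~194 figure cited above.
print(n_for_correlation(0.20))                      # -> 194
```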
Perceived Stress Scale (PSS-10) Participants' perceived stress level for the past month was assessed using the Bangla version (Islam, 2020) of the perceived stress scale (Cohen et al., 1983). PSS-10 is a 5-point 10-item Likert-type self-report measure (0 = never, 1 = almost never, 2 = sometimes, 3 = fairly often, 4 = very often). Individual scores on the PSS can range from 0 to 40, with higher scores indicating higher perceived stress. Scores ranging from 0 to 13 would be considered low stress, scores ranging from 14 to 26 would be considered moderate stress, and scores ranging from 27 to 40 would be considered high perceived stress. The reliability of the scale is reported as 0.84 (Taylor, 2015). In this study, PSS-10 had an acceptable internal consistency (Cronbach α = 0.80). Confirmatory factor analysis from the present study data suggested acceptable model fits of the perceived stress scale (χ 2 = 60.49, df = 34, p = 0.003, CFI = 0.94, TLI = 0.92, RMSEA = 0.06, SRMR = 0.05). Procedure Participants were recruited utilizing the snowball sampling technique. The survey was announced via social and traditional media, email groups, personal acquaintances, and other online means. Participation in the study was voluntarily and was not compensated. Participants received information on the aims of the study, confidentiality, and the right to withdraw at any phase of the survey. Information about demographics and survey questions were collected using Qualtrics survey soft-ware™. The survey took approximately 20 min. The validation process of translation-back translation procedures was implemented in countries where the measures of the study had no established language adaptations (Yamada et al., 2021). Ethics The COVIDiSTRESS global survey received a waiver to proceed from Aarhus University's Research Ethics Committee, and approval was granted post hoc on June 10, 2020 (2020-0066175). In compliance with General Data Protection Regulation standards, all data were anonymous. This survey was conducted in Bangladesh following the Declaration of Helsinki and its later amendments or comparable ethical standards. As it was an online survey, signed informed consent was not possible to take. After reading research objectives, confidentiality, and other related information, there was an option about whether participants agreed or not. If they clicked on I understand and agree to participate, they got access to the survey questionnaire. Data Analysis All statistical analyses were conducted using IBM SPSS Statistics (Version 20.0). Before proceeding with the analyses, data were screened for missing values, outliers, and normality. As mentioned earlier, we included observations that had no missing values in the study variables. The normality of the distribution was assessed through regression residuals. Regression residuals ranged between −2.77 and 2.50. The Kolmogorov-Smirnov and Shapiro-Wilk p-values of the residuals were 0.200 and 0.753, respectively. If there was no outlier and data were normally distributed, data were suitable for the parametric tests. Next, internal consistency reliability (Cronbach alpha) of the Corona Concern-5, PSS-10, and SPS-10 were assessed. Descriptive (e.g., mean, standard deviation, and correlation) and inferential (e.g., t-test, F-test, and hierarchical regression analysis) statistics were applied. 
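A minimal sketch of the three-step hierarchical regression outlined in this subsection is given below, built with ordinary least squares in statsmodels and mean-centred predictors for the product term. The data are simulated and the variable names are illustrative; they mirror the structure of the analysis only, not the COVIDiSTRESS records.

```python
# Illustrative three-step hierarchical regression (synthetic data):
# Step 1: gender, age -> stress; Step 2: + concern, support; Step 3: + concern x support.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 204
d = pd.DataFrame({
    "gender":  rng.integers(0, 2, n),                 # 0 = female, 1 = male
    "age":     rng.normal(28, 6, n),
    "concern": rng.normal(25, 4, n),
    "support": rng.normal(48, 8, n),
})
d["stress"] = (22 - 0.15 * d.age + 0.35 * d.concern - 0.20 * d.support
               + rng.normal(0, 4, n))

# Mean-centre predictors before forming the product term to reduce collinearity.
d["concern_c"] = d.concern - d.concern.mean()
d["support_c"] = d.support - d.support.mean()

steps = {
    "step1": "stress ~ gender + age",
    "step2": "stress ~ gender + age + concern_c + support_c",
    "step3": "stress ~ gender + age + concern_c + support_c + concern_c:support_c",
}
prev_r2 = 0.0
for name, formula in steps.items():
    fit = smf.ols(formula, data=d).fit()
    print(f"{name}: R2 = {fit.rsquared:.3f}  (delta R2 = {fit.rsquared - prev_r2:.3f})")
    prev_r2 = fit.rsquared
```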
Results
The possible range, scale midpoint, actual range, mean, standard deviation, and coefficient of variation (CV) for the key variables are presented in Table 1 to address the first objective. The figures in Table 1 show that the mean concern about the consequences of COVID-19 was very high (M = 24.78) with low dispersion. Perceived social support was also very high (M = 47.91) with low dispersion, whereas perceived stress was moderate (M = 18.35) with moderate dispersion. To assess the associations between the study variables (the second objective), Pearson product-moment correlation coefficients were computed; the results are presented in Table 2, which shows the correlation coefficients for each pair of key variables. Perceived stress was moderately and negatively related to both age and social support. Concern about coronavirus was moderately and positively related to both social support and perceived stress. In order to test our predictions (the third objective), a hierarchical regression analysis was conducted. In step 1, we entered gender and age as covariates to control for the possible effects of these two demographic variables. In step 2, we entered COVID-19 concern and social support. Perceived stress was the dependent variable. In step 3, the product term was entered to assess the possible interaction between concern and social support. As shown in Table 3, gender and age as covariates accounted for significant variance in stress, although the effect of gender was nonsignificant (R² = 0.093, F(2, 201) = 10.33, p < 0.001). Adding coronavirus concern and social support in the second step accounted for significant additional variance in stress (R² = 0.248, F(4, 199) = 20.54, p < 0.001). In the last step, the interaction term (coronavirus concern × social support) was added; however, this step was not significant (R² = 0.257, F(5, 198) = 2.39, p > 0.05). In this final model, the main effect of coronavirus concern and the interaction effect were not significant. This nonsignificant result rejected the second hypothesis, that social support moderates the association between coronavirus concern and stress. However, the main effects of age and social support were significant (β = −0.221, p < 0.01; β = −0.840, p < 0.05). These results confirmed the first hypothesis, that social support is a significant predictor of stress. Based on this, we retained the second model, which shows that 24.8% of the variance in stress can be explained jointly by age, concern, and social support (β = −0.224, p < 0.05; β = 0.317, p < 0.01; β = −0.315, p < 0.001).
Table 3. Hierarchical regression assessing the effect of a continuous moderating variable (social support) on the concern-to-stress relationship, with age and gender as covariates (N = 204).
To visualize the role of social support in perceived stress, we plotted the results using ModGraph (Jose, 2013). Figure 1 shows a classic triangle pattern with the fan effect on the left side. There is a clearly positive slope to the lines, which reflects the significant main effect of coronavirus concern on stress. There is also a moderate spread or separation of the lines, which signifies the main effect of social support on stress. However, the lines are essentially parallel, which indicates a nonsignificant interaction: the relationship between concern and stress did not differ across levels of social support.
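A hedged sketch of the ModGraph-style probe, computing the simple slope of concern on stress at low, medium and high (±1 SD) levels of social support, is shown below on simulated data; roughly parallel slopes across levels would correspond to the non-significant interaction reported here, while the simple-slopes estimates themselves are taken up in the next paragraph.

```python
# Simple-slopes probe of a concern x support interaction at -1 SD, mean, +1 SD of support
# (ModGraph-style check). Synthetic, illustrative data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 204
d = pd.DataFrame({"concern": rng.normal(25, 4, n), "support": rng.normal(48, 8, n)})
d["stress"] = 20 + 0.35 * d.concern - 0.20 * d.support + rng.normal(0, 4, n)
d["concern_c"] = d.concern - d.concern.mean()
d["support_c"] = d.support - d.support.mean()

fit = smf.ols("stress ~ concern_c * support_c", data=d).fit()
b = fit.params
sd = d.support_c.std()
for label, level in [("low (-1 SD)", -sd), ("medium (0)", 0.0), ("high (+1 SD)", sd)]:
    slope = b["concern_c"] + b["concern_c:support_c"] * level
    print(f"support {label:>12}: slope of concern on stress = {slope:.3f}")
# Similar slopes across the three levels would indicate no meaningful moderation.
```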
Simple slope analyses as presented in Table 4 clearly demonstrated that all three lines are significantly different from zero with decreasing regression weights for high, medium, and low social support groups (β = 0.56, p < 0.001; β = 0.44, p < 0.001; β = 0.32, p < 0.05). In summary, participants had higher coronavirus concern, social support, and moderate perceived stress. Study variables had low but significant correlations between them. Age, coronavirus concern, and social support were significant predictors of stress that explained the 28.4% variability of stress. Social support didn't moderate (buffer) the association between coronavirus concern and stress. Discussion A number of studies suggested the psychological vulnerability among people during the current pandemic Wang et al., 2020). The present study examined whether social support would be a protective factor to reduce stress induced by the current pandemic. The data were collected during the early lockdown imposed by the Bangladesh government. The present study suggested participants had higher concerns regarding the COVID-19 outbreak in the country, and it strongly predicted higher perceived stress. This study revealed a negative association between social support and perceived stress during the current pandemic, where social support was a strong predictor rather than a protector of the stress along with concerns related to the COVID-19 outbreak in the country. In a recent study, Ahmed et al., (2021) have found that around 80% of participants were worried about COVID-19 infection. They suggested normative response to the COVID-19 outbreak in the country and neurotic personality traits as predictors for such higher COVID-19 concern. Bangladeshi people are experiencing for the first time such pandemic and related measures taken by the government. Therefore, it might be a reason for higher COVID-19 concerns. As a developing country, the health service system of Bangladesh is not sufficient to meet the treatment need of the large population of the country. Even with developed health facilities, developed countries like the USA, UK, etc., are facing serious trouble during the COVID-19 outbreak in these countries. Insufficient treatment facilities overflow of misleading information over Facebook (as Facebook is the most popular social media in Bangladesh) might raise concerns about COVID-19 infections. Several studies regarding COVID-19 impacts on mental health have suggested that symptoms of stress, post-traumatic stress disorder, anxiety, depression, etc., were increased during the COVID-19 pandemic than earlier Baculinao et al., 2020;Liu et al., 2020). Desclaux et al., (2017) suggested that people worry about their health during an epidemic outbreak, and this worry increases if they find any physical symptoms similar to the infection. However, social support becomes an important factor in a stressful situation. The result regarding the association between social support and stress is consistent with previous studies (e.g., Awang et al., 2014;Bukhari & Afzal 2017;Safree et al., 2010;Wang et al., 2014) that suggested a negative association between these two variables. However, regarding the role of social support, predictor or protector/buffer, this study suggested social support as a predictor of stress that supported earlier studies (Bell et al., 1982;Cohen et al., 1982;Frydman, 1981;Lin et al., 1979;Monroe, 1983;Williams et al., 1981). 
This finding did not support the stressbuffering model (Cohen & Wills, 1985) that social support is a protector against the adverse effect of stress. Some studies found that social support is a protector against self-isolation, social distancing, worry about coronavirus, etc. (Banerjee et al., 2020;Nelson et al., 2020). Szkody et al., (2021) reported that social support did not buffer (protector) the association between worry about COVID-19 and psychological health among college students in the USA during the early pandemic. Social support buffered only while the number of days in self-isolation was lower and worry about COVID-19 infection was higher. However, Lui et al., (2021) have found a "reverse buffer effect" of social support on the association between risk perception in COVID-19 and mental health symptoms. Similar to the present study and Szkody et al., (2021), which study was conducted early on the current pandemic in China. Differences in results about the role of social support across studies might be due to differences in cultures. Dryhurst et al., (2020) found differences in COVID-19 risk perception across countries due to socio-cultural differences among these countries. The Bangladesh Government implemented countrywide lockdown and stay-home orders to citizens from very early of the pandemic. During the lockdown, people stayed at home with family members and close others. The weak tie and strong tie theory (Granovetter, 1983) suggests that family members and intimate friends are strong ties and other people (i.e., colleagues, etc.) are weak ties. Strong ties provide supports like emotional and practical, whereas weak ties provide information support. People received more social support from their family members that had an impact on their perceived stress. From the authors' observations, several COVID-19 positive survivors have faced some unexpected problems like total isolation from their neighbors, having to leave the rented house when they get well, and even attacking survivors' houses. Even doctors, nurses, and police officers of civil administration face the same problem. This news has created a fear of loss of social support. Therefore, people having social support had lower perceived stress. People's degree of integration into a large community is an important factor for social support and stress relationships (Cohen & Wills, 1985). However, the scenario regarding COVID-19 concern and compliance with health instructions reduced largely. From the authors' observation, Bangladeshi people are now less worried about COVID-19 compared to what they were at the very beginning of the outbreak. As Bangladesh govt. has started the COVID-19 vaccination program, mass people are becoming reluctant to comply with govt. health directions. There is a total of 1,571,906 people who tested COVID-19 positive, and 27,907 people died on November 12, 2021 (WHO, 2021, November 14). The current rate of tested positive is below 5%. It seems that people are more concerned about their livelihood rather than COVID-19. They receive support from other people to do so as they see that other people are not also following and motivated to comply with govt. health instructions. This social support might be a factor to reduce stress due to COVID-19 as well as concern about it. Limitations and Future Directions The present study had several limitations. Firstly, there was no information about mental health information before the pandemic. 
An individual with poor mental health may have more coronavirus concerns and perceive less social support than an individual with sound mental health. Therefore, a longitudinal study can better explain the research question that was investigated in this study. Secondly, data of the present study was collected via online tools. So, responses were provided by only people who had devices and internet access and were also educated enough. Information regarding concerns about COVID-19, perceived stress, and social support of people having no education or lack of internet access was unknown in this study. Thirdly, as data came from educated people who updated the world's current situation, social normative concern about COVID-19 might affect the data. Therefore, some online data might be over judged or misleading. Fourthly, online data could be subjected to selection bias. We should be cautious about generalizing these findings to the overall Bangladeshi people. There is a contradiction among studies about the role of social support, whether predictor or protector, at the early of the current pandemic. Further exploratory studies would be designed to conclude the role of social support on the association between concern about coronavirus and mental health, including stress. These studies may consider the cross-cultural data that would help to explain the role of the culture. Studies would also be taken to understand the role of social support on the association between coronavirus concern and mental health variables at the current stage of the pandemic. There would be a possible suppression of social support during the data collection period as the lockdown was imposed during that period. This might shift the place of social support from the protector to the predictor. To determine the actual role of social support, further study would include data from the participants living in the same house, frequency of offline and online contacts, quality of the relationship, etc. Conclusions Due to the COVID-19 pandemic, the world becomes stagnant and causes elevated psychological problems. This study showed that COVID-19 concerns as a predictor of stress that result in psychological problems. During the pandemic, social support also impacted perceived stress as a predictor rather than a protector. Currently, Bangladeshi people are not much concerned about the COVID-19 infection as they are receiving more support from mass people to not worry about it. This may reduce stress-related to COVID-19 as well. The present study findings would be helpful to mental health practitioners to prepare and implement treatment and therapies for those exhausted by stress during this COVID-19 pandemic. They can design effective coping strategies to reduce stress by taking measures to mitigate coronavirus concerns and increase social support. Material Availability Data will be available on request. Besides, data is also available at the following link -https:// osf. io/ z39us Code Availability NA Author Contribution MU, MI, and OA designed this study from the COVIDiStress global survey data. MI prepared the introduction section, MU prepared the methods and results section, and OA prepared the discussion section.
Indian Journal of Research in Homoeopathy

The usefulness of homoeopathic medicines for infertility – A case series

Abstract
Introduction: Infertility is the inability to achieve a successful pregnancy within 2 years of regular unprotected sexual intercourse. About 8–12% of couples of reproductive age experience infertility worldwide. Infertility may result from any underlying pathology or unexplained causes and can cause severe emotional disturbances in both partners. The complexity and cost of conventional treatment may not be affordable for a majority of people.
Case Summary: Three cases of infertility with an underlying pathology successfully treated with standalone homoeopathic treatment are reported. These cases presented with a structural deformity as a cause of infertility. The patients' partners were also given homoeopathic medicines in all the cases. The first case showed a long liquefaction time on semen analysis, and the female partner had a unilateral tubal block. In the second case, investigations reported an ipsilateral varicocele and small-sized testes with oligospermia. In the third case, the female had polycystic ovarian syndrome with a sub-septate uterus and multinodular goitre. All three cases were treated with individualised homoeopathic medicine. All the cases were followed up regularly, and they conceived within 6 months of treatment.
Introduction
Infertility is defined as an inability to achieve a successful pregnancy within 2 years of regular unprotected sexual intercourse. [1] Infertility affects about 8–12% of couples of reproductive age globally. [2] The overall prevalence of primary infertility in women of the reproductive-age group is 8.9% in the urban population of Central India. [3] Various factors, such as marriage above the age of 25 years, employment of the woman, nuclear family, family history of infertility, obesity, irregular menstruation pattern, and depression and stress, have a significant association with infertility. [3] Despite the availability of various treatment procedures, their cost is not affordable to many and is not significantly associated with successful pregnancy. [4] In particular, assisted conception methods are unsuitable for most of the population. Infertility can cause a stressful condition for couples, as the impact is long term. It affects the individuals' perspective of themselves, their life and their relationship. [5] Three cases are presented here; all of the couples had been suffering from infertility for about 2 years and had taken conventional treatment for more than 1 year without any positive results. These cases presented with a structural deformity preventing fertility and demonstrated improvement with homoeopathic treatment. Both partners were taken into consideration for homoeopathic treatment.

Case 1
A young couple presented at the outpatient department for the treatment of the inability to conceive after 2 years of regular unprotected sexual intercourse. The 29-year-old male partner was normal; however, his semen analysis showed a delay in liquefaction time. He had a history of chickenpox at 10 years of age and developed joint pain by the age of 28. His thermal reaction was hot and he had a desire for sweets. The 24-year-old female partner reported a doubtful free flow in the tubal patency test. She had also been diagnosed with cysts in her ovaries, for which she had taken conventional treatment. She had a history of chickenpox at 9 years of age, treated with folk medicine. Thermally, she was hot and had a desire for spicy things. She used to weep after anger and had anxiety about trifles. She also complained of increased leucorrhoea before menses, which was thick, white, sticky and offensive, along with itching of the vulva and groin. Her menstrual discharge was clotted most of the time. Furthermore, she suffered from pain on the left side of the forehead as well as pain in the back before menses.

Treatment history
During case taking, no characteristic symptoms were elicited from the patient. With reference to the rubric 'Male Sterility' in the Synthetic Repertory, the homoeopathic medicine X-ray 30C was prescribed for the male partner. The female partner was first prescribed the anti-sycotic medicine Thuja occidentalis 30C, based on the predominant sycotic miasm in the case, before prescribing the indicated medicine Natrum muriaticum 200C for the totality of symptoms [Figure 1]. Because the female partner had a doubtful free-flow tubal blockage, Thiosinaminum 30C was also prescribed, based on its dissolving pathophysiological action. The couple conceived after 3 months of homoeopathic treatment. A detailed follow-up is given in Table 1. The case was followed up every month for the continuation of the pregnancy till the birth of the baby. The course of pregnancy was uneventful. The Modified Naranjo Criteria score is mentioned in Table 2.
[6] Case 2 Another couple presented for the treatment of primary infertility for 2 years. The 26-year-old male was suffering from oligospermia, small-sized left testis and varicocele. He had increased sexual desire with a decreased ability and became tired immediately after coition. A history of chickenpox was there at 15 years of age and was treated with allopathic medicines. An episode of recurrent attack of fever was there 6 months back. His mother had diabetes mellitus and his father had hypertension. Physical generals included thirst for large quantities of water, increased sweat and occasionally painful urination with itching of the penis. Thermally, he was chilly. He had a desire for meat, non-vegetarian foods and spicy things. He had an intolerance to crabs and dates which caused vomiting, and ice cream and sweets caused numbness of the head. He was punctual and quick tempered. The 23-year-old female partner presented with irregular menses for 1½ years. She had a 32-45-day cycle, with 3 days of flow, associated with pain in the right side of the lower abdomen on the 2 nd day of menses, and headache 1 week before menses. She was suffering from polycystic ovarian syndrome (PCOS) at the time of consultation. She also complained of dyspareunia and dryness of the vagina during coition. She had taken hormone therapy for 2 months. She had a history of sinusitis and urticarial eruption 1 year ago. Mental generals include weeping easily, stage fright, miserly and irritability before menses. She had reduced thirst with a preference for icy cold water and was constipated. Her thermal reaction was hot. She had a desire for meat, an aversion to milk and an intolerance to shellfish which caused vomiting and abdominal pain. Further, she suffered from leucorrhoea, curd like in appearance during urination which aggravated after travelling and after coitus associated with itching in the vagina. Acne with itching was present for 6 months following hormone therapy for PCOS. Treatment history Phosphorus was prescribed for the male partner based on the totality of symptoms.Although the patient improved in general, the weakness of the back after coition and increased sexual desire with an inability to perform persisted. These symptoms were covered by Selenium metallicum in the repertory chart. Hence, he was treated with a constitutional medicine Phosphorous 200C for 3 months, followed by Selenium metallicum 200C [ Table 3]. The female partner was prescribed Natrum muriaticum 200C based on the totality of symptoms. The couple conceived after 6 months of homoeopathic treatment. The repertory chart is represented in Figures 2 and 3. Modified Naranjo Criteria are mentioned in Table 2. Case 3 A 31-year-old female suffered from primary infertility with inability to conceive for 2 years. She had complaints of irregular menses for 1 year, dryness of the vagina and reduced sexual desire. Since puberty, her menses were delayed by 3-4 days and later on by 10 days. A sensation of bloating of the body was present before menses which was relieved after menses. On investigation, she was diagnosed with PCOS and sub-septate uterus. She had earlier taken homoeopathic treatment for irregular menses. 
Furthermore, she had a history of multinodular goitre during puberty, which was treated with homoeopathic medicine. She also had a history of left-sided sinusitis. Her father had diabetes mellitus and her mother had hypertension. She used to weep easily, and consolation aggravated all her problems. She was irritable, with sadness and a weeping tendency before menses. Thermally, she was chilly. Her stools were hard, and she had a strong craving for pickles. Moreover, she suffered from left-sided sciatica and pain on the right side of the hypogastrium. No abnormalities were detected in the male partner.

Treatment history
The treatment started with Natrum muriaticum 200C for 3 months; later, the medicine was changed to Sepia officinalis 200C with reference to the rubric 'external throat, goitre, right-sided' in Kent's repertory, and she conceived after 6 months of homoeopathic treatment. A detailed follow-up is given in Table 4. The repertory chart is represented in Figure 4, which shows the detailed symptoms of the baseline consultation. Modified Naranjo Criteria are mentioned in Table 2.

Discussion
Infertility is a major problem for a large group of the population worldwide, for which a wide variety of conventional treatment options is available. Despite all these treatment modes, many patients remain infertile. [7] Some cases suffer from infertility due to certain pathologies, while a number of them have unknown causes. [8] Homoeopathic medicines have shown their usefulness over decades in the treatment of infertility, both in pathologically advanced cases and in those of unknown aetiology. [9-11] All three cases reported here initiated treatment early, as they had a known pathology. They preferred homoeopathic treatment as a final step due to the inability to conceive. A prolonged liquefaction time is a possible cause of infertility. [12] In the first case, the male partner had a prolonged liquefaction time and X-ray 30C was prescribed. Since the female partner's urine pregnancy test was positive at the subsequent visit, the couple did not submit a repeat semen analysis report. Homoeopathic medicines are useful in treating cases of tubal blockage. A case report showed the usefulness of the individualised homoeopathic medicine Thuja occidentalis, selected on the basis of miasm, and Thiosinaminum as a specific medicine in the treatment of tubal block. [13] In the case reported here, the female partner was suffering from a unilateral tubal block and the same medicines were found to provide a positive result. A retrospective matched control study revealed that uterine anomalies such as septate, sub-septate and arcuate uterus decrease pregnancy and live birth rates in in vitro fertilisation/intracytoplasmic sperm injection. [14] In these three cases with structural changes, the functional symptoms preceding the structural changes were considered for the totality of symptoms and prescription. Homoeopathy can treat cases with an underlying pathology or even when there is no apparent aetiology. [15] This case series focuses on the individual and not solely on the disease, as per the homoeopathic principle of treating the man in disease and not the disease in man. In all three cases, the indicated common remedy was Natrum muriaticum.
It was also found to be useful for common symptoms such as weeping easily, irritability with sadness and weeping tendency before menses, irritability before menses, consolation aggravates, dryness of skin, irregular menses, menses too late, dysmenorrhoea and headache before menses. Dyspareunia and dryness of the vagina during coition were also prominent symptoms. Reduced sexual desire and itching on the external parts were the associated complaints. [16] It is ideal to treat both partners for infertility. Proper mental health support and individualised homoeopathic medicines are ideal to establish the family's well-being. [17] Individualised homoeopathic treatment can contribute to the management of infertility. [18] This case series is limited to three cases; a study with a large sample size is warranted for further validation of the results.

Conclusion
Individualised homoeopathic treatment is found to be useful in the treatment of infertility. Even in the above-mentioned three cases with an underlying pathology, homoeopathic medicines were able to give positive results. It can be suggested that well-designed studies with a larger sample size could draw conclusive results.

Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. The patients had given their consent for the images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
SWNet: Small-World Neural Networks and Rapid Convergence Training large and highly accurate deep learning (DL) models is computationally costly. This cost is in great part due to the excessive number of trained parameters, which are well-known to be redundant and compressible for the execution phase. This paper proposes a novel transformation which changes the topology of the DL architecture such that it reaches an optimal cross-layer connectivity. This transformation leverages our important observation that for a set level of accuracy, convergence is fastest when network topology reaches the boundary of a Small-World Network. Small-world graphs are known to possess a specific connectivity structure that enables enhanced signal propagation among nodes. Our small-world models, called SWNets, provide several intriguing benefits: they facilitate data (gradient) flow within the network, enable feature-map reuse by adding long-range connections and accommodate various network architectures/datasets. Compared to densely connected networks (e.g., DenseNets), SWNets require a substantially fewer number of training parameters while maintaining a similar level of classification accuracy. We evaluate our networks on various DL model architectures and image classification datasets, namely, CIFAR10, CIFAR100, and ILSVRC (ImageNet). Our experiments demonstrate an average of ~2.1x improvement in convergence speed to the desired accuracy Introduction Deep learning models are increasingly popular for various learning tasks, particularly in visual computing applications. A big advantage for DL is that it can automatically learn the relevant features by computing on a large corpus of data, thus, eliminating the need for hand-selection of features common in traditional methods. In the contemporary big data realm, visual datasets are increasingly growing in size and variety. For instance, the ILSVRC challenge dataset has 22K classes with over 14M images [25]. To increase inference accuracy on such challenging datasets, DL models are evolving towards higher complexity architectures. State-of-the-art models tend to reach good accuracy, but they suffer from a dramatically high training cost. As DL models grow deeper and more complex, the large number of stacked layers gives rise to a variety of problems, e.g., vanishing gradients [7,3], which renders the models hard to train. To facilitate convergence and enhance the gradient flow for deeper models, creation of bypass connections was recently suggested. These shortcuts connect the layers that would otherwise be disconnected in a traditional Convolutional Neural Network (CNN) [28,9,12,35]. To curtail the cost of hand-crafted DL architecture exploration, the existing literature typically realizes the shortcuts by replicating the same building block throughout the network [9,12,35]. However, such repeated pattern of blocks in these networks induces unnecessary redundancies [11] that increase the computational overhead. This paper proposes a novel methodology that transforms the topology of conventional CNNs such that they reach optimal cross-layer connectivity. This transformation is based on our observation that the pertinent connectivity pattern highly impacts training speed and convergence. To ensure computational efficiency, our architectural modification takes place prior to training. Thus, the incorporated connectivity measure must be independent of network gradients/loss and training data. 
Towards this goal, we view CNNs as graphs and revisit Small-World Networks (SWNs) [34] from graph theory to transform CNNs into highly-connected small-world topologies. Watts-Strogatz SWNs [34] are widely used in the analysis of complex graphs; Due to SWNs' specific connection pattern, these structures provide theoretical guarantees for considerably decreased consensus times [23,32,36]. Our network modification algorithm takes as input a conventional CNN architecture and enforces the small-world property on its topology to generate a new network, , called SWNet. We leverage a quantitative metric for smallworldness and devise a customized rewiring algorithm. Our algorithm restructures the inter-layer connections in the input CNN to find a topology that balances regularity and randomness, which is the key characteristic of SWNs [34]. Small-world property in CNNs translates to an architecture where all layers are interlinked via sparse connections. An example of such network is shown in Fig. 1. SWNets have similar quality of prediction and number of trainable parameters as their baseline feed-forward architectures, but due to the added sparse links and the optimal SWN connectivity, they warrant better data flow. In summary, our architecture modification has three main properties: (i) It removes non-critical connections and reduces computational implications. (ii) It increases the degrees of freedom during training, allowing faster convergence. (iii) It provides customized data paths in the model for better cross-layer information propagation. We conduct comprehensive experiments on various network architectures and showcase SWNets' performance on popular image classification benchmarks including CI-FAR10, CIFAR100, and ImageNet. Our small-world CNNs achieve an average of 2.1-fold improvement in training iterations required to achieve comparable classification accuracy as the baseline models. We further compare SWNet with the state-of-the-art DenseNet model and show that with 10× fewer parameters, SWNets demonstrate identical performance during training. Related work Bypass Connections. A substantial amount of research has focused on the addition of bypass connections to the hierarchical CNN architecture to enhance inter-layer information flow and enable feature reuse. Authors of [28] implement the bypass connections using parametrized (gated) interlinks to enable model fine-tuning. In order to avert the burst in the number of trainable parameters caused by such gated connections, ResNets [9] use identity links (skip connections) to connect the concatenated layers. Such skip connections follow a modular structure. There exist a significant amount of redundancy in (deep) ResNets as alternative inter-layer connections may exist that render higher accuracy while having lower model complexity; as shown by [13], not all identity links are necessary. A variation of ResNets that uses wider residual blocks is introduced in [33,35] to further improve image classification accuracy, while the effects of such architectural modification on the convergence speed and training over- head still need a more comprehensive study. Inception networks [31] are another example of benefiting from wider networks. Authors of [30] show that addition of residual connections to the initially proposed inception architecture drastically increases model convergence speed. This work further motivated us to study CNN convergence gains by addition of bypass connections. 
DenseNets [12] group CNN layers in blocks with each layer connected to all its preceding layers. This is done by concatenating previous layers' feature-maps and using it as the input. Another work [11] argues that such dense connectivity pattern incurs redundancies since earlier features might not be required in later layers. The authors propose to prune such redundancies to generate a more efficient architecture for CNN inference phase. However, the paper does not explore the possible effects of pruning on training. In summary, the prior work mainly focuses on accuracy gains of long-range connections with little attention to the training overhead induced by the introduction of redundant parameters. In contrast to prior art, we select only to add long-range connections that are key contributors to model accuracy as well as convergence speed. To the best of our knowledge, SWNet is the first work to intertwine the smallworld property with CNNs and to examine the trained network in terms of convergence speed and accuracy. To further highlight the distinction between our work and prior art, Fig. 2 illustrates the connection patterns in a ResNet, DenseNet, and SWNet architecture. In contrast to these two models, SWNet is not structured upon fixed building blocks and therefore can adapt to any given net-work architecture. Different from DenseNets which only accommodate fully dense connections, SWNet leverages customized sparse convolutions. Such sparsities enable selective connectivity between pairs of layers that enhance convergence speed while ensuring a low redundancy. Small-wold Network. Perhaps the first investigation of SWNs in the context of deep learning was performed in [6], where the authors transform simple MLPs to SWN graphs and study the accuracy benefits for diagnosis of diabetes. SWNet substantially differs from this work as our solution is applicable to convolutional neural networks and uses a different mathematical model and small-worldness metric. Background: Small-World Networks Watts and Strogatz [34] observed that real-world complex networks, e.g., the anatomical connections in the brain and the neural network of animals, cannot be modeled using the existing regular or random graph classes. As such, they introduced the new category of small-world networks. Members of the small-world class have two main characteristics: 1) They have a small average pairwise-distance between graph nodes. 2) Nodes within the graph exhibit a relatively high (local) clustered structure. The first property is mainly associated with random graphs while the second property is prominent in regular graph classes. Such networks have shown significant enhancement in signal propagation speed, consensus, synchronization, and computational capability [29,19,2,18,36]. Randomness is introduced into a regular graph structure by iterative removal and addition of edges with probability, p, in order to construct an SWN. Fig. 3 demonstrates the transition between a regular structure and its corresponding random graph as the rewiring probability increases from 0 to 1. Intermediate values of p interpolate between complete regularity and randomness to generate an SWN. SWNet: Small-World CNNs We propose to restructure the inter-layer connections in a DL model such that its topology falls into the small-world category while the total number of parameters in the network is held constant. 
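Before detailing that conversion, the regular-to-random transition described in the background above can be made concrete with a short sketch. The snippet below uses networkx to sweep the rewiring probability of a Watts-Strogatz ring lattice, tracking the clustering coefficient and characteristic path length, and scores each graph with a small-worldness index against an Erdős–Rényi reference of the same size; the lattice size, degree, and the exact form of the index are illustrative assumptions, not the settings used by the authors.

```python
# Sketch: sweep the Watts-Strogatz rewiring probability p, tracking clustering C,
# characteristic path length L, and an assumed small-worldness index S computed
# against an Erdos-Renyi graph with the same number of nodes and edges.
import networkx as nx

def small_worldness(g: nx.Graph, seed: int = 0) -> float:
    n, m = g.number_of_nodes(), g.number_of_edges()
    er = nx.gnm_random_graph(n, m, seed=seed)              # ER reference graph
    if not nx.is_connected(er):                            # keep the giant component
        er = er.subgraph(max(nx.connected_components(er), key=len)).copy()
    c_g, c_r = nx.average_clustering(g), max(nx.average_clustering(er), 1e-9)
    l_g, l_r = (nx.average_shortest_path_length(g),
                nx.average_shortest_path_length(er))
    return (c_g / c_r) / (l_g / l_r)

best_p, best_s = None, -1.0
for p in [0.0, 0.01, 0.05, 0.1, 0.3, 1.0]:
    g = nx.connected_watts_strogatz_graph(100, 6, p, seed=0)   # illustrative size/degree
    c, l = nx.average_clustering(g), nx.average_shortest_path_length(g)
    s = small_worldness(g)
    print(f"p={p:<5} C={c:.3f}  L={l:.3f}  S={s:.2f}")
    if s > best_s:
        best_p, best_s = p, s
print(f"selected rewiring probability: p={best_p} (S={best_s:.2f})")
```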
Throughout the paper, we use the terms DL model and CNN interchangeably, but emphasize that our approach is also readily applicable to models without convolutions, e.g., Multi-Layer Perceptrons (MLPs). In the following, we first elaborate on the small-world criteria and introduce methods to distinguish SWNs from other topologies (Sec. 3.1). We then explain our conversion of an arbitrary CNN into its equivalent SWN (Sec. 3.2). Lastly, we delineate our implementation and formalize the operations performed in a SWNet (Sec. 3.3).

Metric for Small-Worldness

To examine the small-world property for a given graph, we study two properties, namely, the characteristic path length (L) and the global clustering coefficient (C). L is defined as the average distance between pairs of nodes in the graph, and C is a measure of the density of connections between the neighbors of any node in the network. A completely random graph lacks clustering but enjoys a small L. By definition, a graph is small-world if it has a similar L but higher C than an Erdős-Rényi (ER) random graph [37] constructed using the same number of vertices and edges. Let us denote the clustering coefficient and the characteristic path length of a given graph G by C_G and L_G, respectively. In a similar fashion, we represent the corresponding characteristics of the equivalent ER random graph by C_rand and L_rand. We use a quantitative measure of the small-world property from [14], which categorizes a network as an SWN if S_G > 1, where S_G is calculated using Eq. (1):

S_G = (C_G / C_rand) / (L_G / L_rand). (1)

Graph Generation

In order to modify a given CNN architecture and generate the equivalent SWN, we first model all connections within the network as a graph representation. In this context, a connection is defined as a linear operation performed between an input element and a trainable weight (network parameter) found in Convolution (Conv) and Fully-Connected (FC) layers. For Conv layers, each feature-map channel is represented by a node and each edge represents a k × k kernel. For FC layers, each neuron is assigned a separate node and the edges correspond to weight matrix elements.

Architecture Search

After generating the graph pertinent to the input CNN architecture, we proceed to find the equivalent SWN. To perform this task, the initial graph is randomly rewired with different probabilities, p ∈ [0, 1], similar to Fig. 3. For each rewired graph, we compute the characteristic path length L and the clustering coefficient C, and use the captured pattern for each criterion to detect the small-world topology using the small-worldness measure defined in Sec. 3.1. We denote an edge by e(v_i, v_j), where v_i and v_j are the start and end nodes. To perform random rewiring with probability p, we visit all edges in the graph once. Each edge is rewired with probability p or kept the same with probability 1 − p. If an edge is to be rewired, a new destination node v̂_j is randomly sampled from the set of nodes that are non-neighbors of the edge's start node, v_i. This node is selected such that no self-loops or repeated links exist in the rewired graph. Once the destination node is chosen, the initial edge e(v_i, v_j) is removed and replaced by e(v_i, v̂_j). Fig. 4 demonstrates our rewiring mechanism. Note that our rewiring methodology does not alter the number of connections in the CNN. As a result, the total number of trainable parameters in the SWN model equals that of the original network.
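The scoring inside this search, comparing C and L against an ER graph with the same vertices and edges and then forming S_G, can be written compactly; the sketch below is our own illustration using networkx, not the authors' released code:

```python
# Sketch: the small-worldness score of Sec. 3.1, assuming networkx. The ER
# baseline uses the same number of vertices and edges as G; we average over
# a few samples and skip the rare disconnected ones (where L is undefined).
import networkx as nx

def small_worldness(G, n_samples=10, seed=0):
    """Return S_G = (C_G / C_rand) / (L_G / L_rand); small-world if S_G > 1."""
    C_g = nx.average_clustering(G)
    L_g = nx.average_shortest_path_length(G)
    C_r = L_r = 0.0
    kept = 0
    for s in range(n_samples):
        R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(),
                                seed=seed + s)
        if not nx.is_connected(R):
            continue
        C_r += nx.average_clustering(R)
        L_r += nx.average_shortest_path_length(R)
        kept += 1
    C_r, L_r = C_r / kept, L_r / kept
    return (C_g / C_r) / (L_g / L_r)

# A Watts-Strogatz graph in the small-world regime scores well above 1.
print(small_worldness(nx.connected_watts_strogatz_graph(100, 6, 0.1, seed=1)))
```

Selecting, among the rewired candidates, the graph that maximizes this score implements the "maximum S_G" rule used in the profiling step below.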
Network Profiling. Using the aforementioned rewiring policy, we generate various graphs by sweeping the rewiring probability over the [0,1] interval. Fig. 5 demonstrates the correlation between C and L as the rewiring probability is changed for a 14-layer CNN model. For conventional CNNs, the clustering coefficient is zero and the characteristic path length can be quite large, especially for very deep networks (leftmost points on Fig. 5). As such, CNNs are far from networks with the small-world property. Random rewiring replaces short-range connections to immediately subsequent layers with longer-range connections. Consequently, L is reduced while C increases as the network shifts towards its small-world equivalent. We select the topology with the maximum value of the small-world property, S_G, as the SWNet. As a direct result of such architectural modification, the new network enjoys enhanced connectivity, which results in better gradient propagation and training speedup. To demonstrate the efficiency of the SWN versus the other configurations generated during the probability sweep, we train several rewired networks on the MNIST dataset [20], each of which is constructed from a 5-layer CNN. Fig. 6 shows the convergence speed of these various architectures versus the rewiring probability used to generate them from the baseline model. Due to the addition of long-range connections, almost all models show convergence improvements over the baseline. However, the perfect balance between node clustering and average path length is achieved for the SWN. This, in turn, renders the fastest convergence.

Figure 6: Convergence speed of a 5-layer CNN and its randomly rewired counterparts. All values are normalized by the baseline convergence rate. The SWN is shown with a red star.

SWNet Methodology

CNN Formulation. Conventional CNNs are comprised of subsequent layers, where each layer l in the network performs a combination of linear and nonlinear operations on its input, x_l, to generate the corresponding output, y_l. We denote the core linear operations (Conv and FC) in a CNN by W_l(·), with the subscript representing the layer index. Other operations can take the form of Batch Normalization (BN) [15], Rectified Linear Unit (ReLU) [8], and Pooling [21]. For each linear layer, we bundle one or more of such operations together and show them as one composite function, C_l(·). For an arbitrary layer l in a conventional CNN, the output is formalized as:

y_l = C_l(W_l(x_l)). (2)

Note that the cascaded nature of CNNs implies that the generated output from one layer serves as the input to the immediately succeeding layer: x_{l+1} = y_l.

Sparse Connections in SWNets. One major difference between SWNets and conventional CNNs is that SWNet layers can be interconnected regardless of their position in the network hierarchy. More specifically, the output of each layer of a SWNet is connected to all its succeeding layers via sparse weight tensors. These connections are implemented via convolution kernels with coarse-grained sparsity patterns. Fig. 7 shows the convolution filters of an example sparse connection from a layer with 5 output channels to a layer with 3 output channels, and its small-world graph representation. Let us denote the sparse connection from layer l1 to layer l2 by W^s_{l1,l2}(·). The output of the l-th layer in SWNet can then be calculated as:

y_l = C_l( W^s_l(x_l) + Σ_{l1 < l−1} W^s_{l1,l}(y_{l1}) ). (3)

Comparing the above formulation with Eq. (2), we highlight the extra summation term that accounts for the inter-layer connections.
Note that in Eq. (3), both W^s_l and W^s_{l1,l} are sparse tensors. The inter-layer connectivity in SWNet enables enhanced data flow, in both the inference and training stages, while the sparse connections mitigate unnecessary parameter utilization. In contrast to the previously proposed feature concatenation methodology [12], we perform summation over the feature-maps. By means of this approach, we mitigate the appearance of the extremely high dimensional kernels that result from channel-wise feature-map concatenation. Furthermore, the summation of feature-maps enables SWNet to be applicable to all network architectures with various layer configurations.

Composite Non-linear Operation. In contrast to DenseNets [12] and ResNets [9], where several linear layers are concatenated before pooling is performed, SWNets support pooling immediately after each Conv layer, as seen in conventional CNN architectures. We experiment with various configurations of the widely-used non-linear operations, i.e., BN, ReLU, and MaxPool, to investigate the effect of ordering on network convergence. Our experiments demonstrate that SWNet convergence is enhanced when the composite non-linear function C_l is implemented as a ReLU, followed by MaxPooling, and then BN, as shown in Fig. 2.

Figure 7: Coarse-grained sparse convolution between a layer with ch_1 = 5 output channels and a layer with ch_2 = 3 output channels. Left: sparse convolution weights; for each connection removed from the graph, the corresponding filter in the sparse convolution weight is masked to zero. Right: equivalent graph with nodes representing channels.

Experiments

We conduct proof-of-concept experiments on different network architectures and image classification benchmarks to empirically demonstrate the enhanced convergence speed of SWNets compared to their baseline (conventional) counterparts. Our implementations are available in popular neural network development APIs, Keras [4] and PyTorch [24].

Datasets

CIFAR. We carry out our experiments on the two available CIFAR [16] datasets. The CIFAR10 (C10) and CIFAR100 (C100) benchmarks consist of colored images with dimensionality 32 × 32 that are categorized into 10 and 100 classes, respectively. Each dataset contains 50,000 samples for training and 10,000 samples for testing. We use the standard data augmentation routines popular in prior work [9,13]. The samples are normalized using the per-channel mean and standard deviation. At training time, random horizontal mirroring, shifting, and slight rotation are also applied.

ImageNet. The ILSVRC-2012 dataset, widely known as ImageNet [5], consists of 1000 different classes of colored images, with 1.2 million samples for training and 50,000 samples for validation. We use the augmentation scheme proposed in [26,10] to preprocess input samples. During training, we resize the images by randomly sampling the shorter edge from [256,480]. A 224 × 224 crop is then randomly sampled from the image. We also perform per-channel normalization as well as horizontal mirroring [17].

Benchmarked Architectures

Tab. 1 lists our baseline CNN architectures. SWNets maintain the same feed-forward architecture as the baseline networks and are constructed by 1) replacing the original Conv layers with sparse convolutions and 2) implementing additional sparse convolutions between non-consecutive layers. To match the dimensionality of inter-layer connected feature-maps, we tune the stride in the long-range sparse connections and use zero-padding where necessary; a sketch of such a masked sparse convolution is given below.
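The coarse-grained sparsity of Fig. 7 can be realized by zeroing whole k × k filters of an ordinary convolution. The sketch below is our own illustration of the construction (PyTorch assumed; the class names and mask layout are ours, not the authors' released code); it masks the weights channel-pair-wise and aggregates incoming long-range feature-maps by summation as in Eq. (3):

```python
# Sketch of a SWNet-style layer with coarse-grained sparse convolutions,
# assuming PyTorch. 'MaskedConv2d', 'SWNetLayer', and the mask layout are
# our illustration of the construction, not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    """Conv2d whose (out_ch, in_ch) filter pairs can be pruned wholesale:
    mask[o, i] = 0 zeroes the whole k x k kernel between input channel i and
    output channel o, mirroring a deleted edge in the SWN graph (Fig. 7)."""
    def __init__(self, in_ch, out_ch, k, mask, stride=1, padding=0):
        super().__init__(in_ch, out_ch, k, stride=stride, padding=padding,
                         bias=False)
        self.register_buffer("mask", mask[:, :, None, None].float())

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, stride=self.stride,
                        padding=self.padding)

class SWNetLayer(nn.Module):
    """Layer l: sparse conv on x_l plus summed sparse long-range inputs
    (Eq. (3)), then the composite ReLU -> MaxPool -> BN reported to work
    best in the text."""
    def __init__(self, main_conv, longrange_convs):
        super().__init__()
        self.main = main_conv
        # One sparse conv per incoming long-range edge; the caller picks
        # strides/padding so all feature-maps match the target dimensions.
        self.longrange = nn.ModuleList(longrange_convs)
        self.bn = nn.BatchNorm2d(main_conv.out_channels)

    def forward(self, x, earlier_outputs):
        y = self.main(x)
        for conv, feat in zip(self.longrange, earlier_outputs):
            y = y + conv(feat)  # summation over feature-maps, not concat
        return self.bn(F.max_pool2d(F.relu(y), 2))
```

Summation keeps every incoming tensor at the target layer's channel count, which is what lets the same construction drop into architectures with heterogeneous layer widths.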
Tuning strides and padding in this way enables us to control the dimensionality of the produced feature-maps, as well as to tune the impact of the added long-range connections.

ConvNet-C

Training. We train the ConvNet-C [26] model on the C10 and C100 benchmarks with a batch size of 128. To prevent overfitting, dropout layers with a rate of 0.4 are added after BN layers with no MaxPool, and with a rate of 0.5 before the first FC layer. The small-world model is constructed using the same configuration of layers as the baseline, including the dropout layers. We use the stochastic gradient descent (SGD) optimizer with Nesterov momentum of 0.9 and a weight decay of 5e-4. Models are trained for 20,000 and 30,000 iterations on C10 and C100, respectively. The initial learning rate is set to 0.01 for both datasets, and the learning rate is decayed by a factor of 0.5 upon an optimization plateau.

Convergence. Fig. 8-(a) illustrates the test error and training loss of the baseline and SWNet as representatives of convergence speed. Similarly, for the C100 benchmark, the corresponding convergence curves are presented in Fig. 8-(b). While these figures qualitatively demonstrate the effectiveness of our methodology, we also provide a quantitative measure for a solid comparison between SWNet and the baseline. We investigate several points corresponding to various test accuracies and compare the two models' convergence times to these points. Tab. 2 summarizes the per-accuracy speed-up of SWNet over the baseline model. As seen, the speed-up varies for different accuracies; however, for all test accuracies, SWNet requires substantially fewer iterations to converge. At the final saturation point (marked on Fig. 8), both models achieve comparable accuracies, while SWNet enjoys a 2.64× and 2.82× reduction in convergence time for the C10 and C100 datasets, respectively.

DenseNet

DenseNets [12] achieve state-of-the-art accuracy by connecting all neurons from different layers of a dense block with trainable (dense) parameters. Such a dense connectivity pattern results in high redundancy in the parameter space and causes extra overhead during training. We show that a SWNet with only sparse connections and far fewer parameters achieves results similar to DenseNet.

Training. We train a DenseNet model with 40 layers and k = 12 (Tab. 1) on the C10 dataset. The equivalent SWNet is constructed by removing all long-range dense connections from the architecture and rewiring the remaining short-range edges such that each dense block transitions into a small-world structure. The SWNet maintains the same number of layers, while the inter-layer connections are implemented using sparse convolution kernels, thus incurring substantially fewer trainable parameters. We use the publicly available PyTorch implementation of DenseNets and replace the model with our small-world network. The same training scheme as explained in the original DenseNet paper [12] is used: models are trained for 19,200 iterations with a batch size of 64. The initial learning rate is 0.1 and decays by a factor of 10 at 1/2 and 3/4 of the total training iterations.

Convergence. Fig. 9 shows the test accuracy of the models versus the number of epochs. As can be seen, although SWNet has far fewer parameters, both models achieve comparable validation accuracy while showing identical convergence speed. We report the computational complexity (FLOPs) of the models as the total number of multiplications performed during one forward propagation through the network.
Tab. 3 compares the benchmarked DenseNet and SWNet in terms of FLOPs and the number of trainable weights in Conv and FC layers. We highlight that SWNet achieves comparable test accuracy while offering a 10× reduction in parameter-space size.

Figure 9: Training loss and testing accuracy of the 40-layer (k=12) DenseNet [12] with 1M parameters and our corresponding SWNet with fewer than 100K parameters.

AlexNet

Training. In order to mitigate overfitting, we add dropout layers with probability 0.5 after each FC layer (except the last). Loss minimization is performed by means of SGD with Nesterov momentum [22] of 0.9. We set the batch size to 64 for both models and incorporate an exponential decay for the learning rate: the initial learning rate is set to 2.5e-3 and the decay factor is 0.99999875 [27].

Convergence. Fig. 10 presents the corresponding convergence results.

ResNet

Training. We adopt the training scheme of the original ResNet paper [9]. To build the SWNet, we first remove all shortcut and bottleneck connections from the model. We then rewire the connections in the resulting plain network such that it becomes small-world. No dropout is used for the baseline or the SWNet. The batch size is set to 128, and we use SGD with momentum of 0.9 and a weight decay of 1e-4. The initial learning rate is set to 0.1 and decays by a factor of 0.1 when the accuracy plateaus. We train the models for 900,000 iterations and report single-crop accuracies.

Convergence. The test error and training loss for the baseline ResNet and the SWNet are shown in Fig. 11. As seen, SWNet achieves both higher accuracy and higher convergence speed throughout training. For a more quantitative comparison, we report point-wise speed-ups for various iterations and test errors in Tab. 5. As evident from the results, the systematic restructuring of long edges in SWNet allows for better convergence speed compared to the replicated blocks in the baseline ResNet.

Figure 11: Test error and training loss across training iterations for ResNet-18 on the ImageNet dataset. Convergence to the minimum error rate is shown with a marker.

Discussion on Long-range Connections

The selected small-world structure for a given CNN has two main characteristics, namely high clustering of nodes and a small average path length between neurons across layers. We postulate that such qualities render the SWN desirable during training due to the enhanced information flow paths present in these efficiently connected networks. To examine our hypothesis, we visualize the weights connecting different layers of the trained SWNet for the C10, C100 (ConvNet-C), and ImageNet (AlexNet) benchmarks. Fig. 12 presents a heat map of the average absolute values of the weights connecting each pair of Conv layers. Each square at position (l_1, l_2) of the heat map represents the strength of the connections between layers l_1 and l_2, where l_0 denotes the network input. Color shades of orange, red, and maroon indicate strong inter-layer dependency, while white indicates that no connections are present between the corresponding layers in SWNet.

Figure 12: Visualization of the average absolute value of trained weights within Conv layers of a SWNet. Colors encode the connectivity strength between layers, with red being the strongest and white denoting no connection. The rows marked with black box borders correspond to the input layer of the networks.

We summarize our observations based on the heat map as follows: 1. Each layer has strong connections to its non-subsequent layers, indicating that the long-range edges established in SWNet are crucial to performance.
2. The input layer has weights spread across all layers of the network, which demonstrates the importance of connections between earlier and deeper layers. 3. SWNet preserves the strong connections between each layer and the immediately succeeding layer, thus maintaining the conventional CNN data flow.

Conclusion

We propose a novel methodology that adaptively modifies conventional feed-forward DL models into new architectures, called SWNets, that fall into the category of small-world networks, a class of complex graphs used to study real-world systems such as the human brain and the neural networks of animals. By leveraging the intriguing features of small-world networks, e.g., enhanced signal propagation speed and synchronizability, SWNets enjoy enhanced data flow within the network, resulting in substantially faster convergence during training. Our small-world models are implemented via sparse connections from each layer in the traditional CNN to all succeeding layers. Such sparse convolutions enable SWNets to benefit from long-range connections while mitigating the parameter-space redundancy present in prior art. As our experiments demonstrate, SWNets are able to achieve state-of-the-art accuracy in approximately 2.1× fewer training iterations, on average. Furthermore, compared to a densely-connected architecture, SWNets achieve comparable accuracy with a 10× reduction in the number of parameters. In summary, due to their optimized graph connectivity and fast convergence during training, SWNets can be advantageous for smart vision applications.
A Scalable Lower Bound for the Worst-Case Relay Attack Problem on the Transmission Grid

We consider a bilevel attacker-defender problem to find the worst-case attack on the relays that control the transmission grid. The attacker maximizes load shed by infiltrating a number of relays and rendering the components connected to them inoperable. The defender responds by minimizing the load shed, re-dispatching using a DC optimal power flow (DCOPF) problem on the remaining network. Though worst-case interdiction problems on the transmission grid are well-studied, there remains a need for exact and scalable methods. Methods based on using duality on the inner problem rely on bounds on the dual variables of the defender problem in order to reformulate the bilevel problem as a mixed integer linear problem. Valid dual bounds tend to be large, resulting in weak linear programming relaxations and making the problem difficult to solve at scale. Often smaller heuristic bounds are used, resulting in a lower bound. In this work we also consider a lower bound, where instead of bounding the dual variables, we drop the constraints corresponding to Ohm's law, relaxing DCOPF to capacitated network flow. We present theoretical results showing that, for uncongested networks, approximating DCOPF with network flow yields the same set of injections, which suggests that this restriction likely gives a high-quality lower bound in the uncongested case. Furthermore, we show that in the network flow relaxation of the defender problem, the duals are bounded by 1, so we can solve our restriction exactly. Last, we see empirically that this formulation scales well computationally. Through experiments on 16 networks with up to 6468 buses, we find that this bound is almost always as tight as we can get from guessing the dual bounds, even for congested networks. In addition, calculating the bound is approximately 150 times faster than achieving the same bound with the reformulation that guesses the dual bounds.

1 Introduction

Table 1. Summary of the scalability of previous literature on the worst-case attack problem, in terms of the number of buses in the network as well as the cardinality of the attack budget.

In Salmeron et al. (2009), the authors present a generalized Benders decomposition algorithm based on the assumption that the total load shed cannot increase by more than the capacity of any one grid component when that component is attacked. This algorithm is capable of solving the problem on networks with more than 5000 buses, but the scaling is not shown to accommodate increases in the size of the attack budget. For physical attacks, limited attack budgets are likely realistic, but for cyber attacks, attackers are not limited by physical resources and could therefore be able to attack large portions of the grid that might not be geographically correlated. In addition, the method is only exact if the assumption holds, which is not necessarily true in congested networks. The authors of Sundar et al. (2018) consider a probabilistic version of the problem, which they solve with an algorithm similar to the Benders approach in Salmeron et al. (2009). They compare several formulations for power flow in the defender problem, including the network flow restriction we analyze in this paper. Their computational study explicitly shows the boundaries of tractability in terms of both the network size and the attacker budget. Their approach scales to networks with up to 2,383 buses and attack budgets of up to 5 components.
Other authors consider yet another variation of the problem, in which the attacks are assumed to be spatially or topologically correlated. With a similar Benders approach, they are able to solve the problem on networks with up to 240 buses and attack budgets of up to 6 lines. In Sundar et al. (2021), the authors revisit this model and develop a cut generation algorithm based on a penalty-based reformulation. In this methodology, the only bounds needed on the DCOPF are bounds on the dual variables corresponding to the thermal limit constraints. Some of these are fixed to 0 for lines that can never be at full capacity. The authors compare an exact version of their method, where the duals that are not fixed to 0 are bounded by the total load in the system, to a heuristic method where these duals are bounded by 1. The number of iterations required for the heuristic version to converge tends to be at least an order of magnitude smaller than for the exact method. In summary, most of the existing methodology requires valid bounds on the dual variables of the DCOPF linear program in order to be exact. Since these bounds are large, the scalability of exact methods is limited in terms of both the size of the network and the size of the attack budget, as summarized in Table 1. The assumptions in these methods are symptoms of a broader problem in bilevel optimization: all methods that dualize the inner problem in order to combine it with the outer problem require relatively tight upper bounds on the dual variables of the inner problem (Smith and Song (2020)). Though it is common to use heuristics to calculate big-M values with which to linearize the KKT conditions of the inner problem, Pineda and Morales (2018) show that these heuristics can fail, even for bilevel problems with linear programming leader and follower problems. Furthermore, Kleinert et al. (2020) show that verifying the correctness of big-M values in bilevel optimization is as hard as solving the original problem. They suggest that, if we choose to solve bilevel problems by reformulating the follower's problem using duality or its KKT conditions, then we will have to resort to problem-specific information in order to generate valid big-M values. Last, while methods such as covering decomposition from Israeli and Wood (2002) are both exact and applicable to this problem (and do not require a big-M), note that the cuts used to block previously-generated attacks are already included in the Benders algorithm from Salmeron et al. (2009), implying that, without enhancement, this is not a scalable approach. Despite the fact that the scalability of the existing approaches for the worst-case attack model is limited, there has been continued interest in the literature in solving extensions of this model and more complicated models which include this model. Bienstock and Verma (2010) develop a problem-specific algorithm for a variation of the problem where the attacker minimizes the number of lines necessary to attack in order to achieve a prespecified amount of load shed. In addition, the authors provide a novel model in which the attacker antagonistically modifies the resistances of the power lines. Further extensions include the addition of transmission line switching as an option for the defender in Delgadillo et al. (2010) and Zhao and Zeng (2013), the inclusion of both short- and medium-term impacts of attacks in Wang and Baldick (2014), modeling attacks which unfold over time in Sayyadipour et al. (2016), modeling coordinated cyber and physical attacks in Li et al. (2016),
and, as previously mentioned, the assumption of spatially correlated physical attacks in the work discussed above and in Sundar et al. (2021). In addition, there has been interest in trilevel planning problems, such as defensive hardening of the network in Yuan et al. (2014), Alguacil et al. (2014), and Wu and Conejo (2017). In this work, we revisit the network flow restriction from Sundar et al. (2018). That is, instead of solving DCOPF in the defender problem, we drop the Ohm's law constraints, simplifying the inner problem to capacitated network flow. This is a restriction of the original problem, as it expands the defender's feasible region, thus restricting the attacker's options. Applying it to get a lower bound for the worst-case relay attack problem, we show it can be used on networks with more than 6000 buses, with attack budgets ranging from small numbers of relays up to 30% of the network, enough to shed all of the load. While such an approach only gives a lower bound, we formally show that, when line capacities are large enough, the optimal objective value of the network flow restriction is the same as that of the original worst-case relay attack problem. This is because network flow is a good approximation of DCOPF when both formulations are projected into the space of injections. That is, the line flows in the optimal solution of the network flow restriction may be dramatically different from those of DCOPF, but the load shed and generator dispatch will be the same. Since the formulation measures the severity of the attack in terms of load shed, the accuracy of the line flows will not affect the attack solution unless the network is congested. While we do expect the attacker to take advantage of his ability to create congestion, we find empirically that, even on congested instances, the bound we get from the network flow restriction is almost always as tight as any we can find by solving the original problem reformulated with improvised dual bounds. As is observed in Roald and Molzahn (2019), in DCOPF very few line limit constraints are ever tight, even accounting for variation in both demand and generation costs. In other words, in practice, transmission networks are rarely congested. Thus, it is not unexpected that the network flow restriction bound appears to be of high quality. In addition, we find that we can obtain this bound within 20 minutes, even on large-scale networks with difficult-to-solve attack budgets. Though this is likely because capacitated network flow is a familiar and highly-optimized problem for commercial solvers, it is worth mentioning that network flow interdiction is itself a well-studied problem with some promising theoretical results which might eventually be applied to solve the network flow restriction. For example, Chestnut and Zenklusen (2017) give an approximation algorithm for an interdiction problem where the attacker eliminates edges in order to minimize the maximum s-t flow. With slight modifications (i.e., modeling generators and loads with mock lines to a super source and a super sink, respectively), the network flow restriction can be modeled as a maximum s-t flow, so Chestnut and Zenklusen (2017) and other combinatorial methods are applicable to it. In summary, our contributions are: 1. a theoretical analysis of the network flow lower bound showing its quality on uncongested networks, and
2. a computational study on 16 networks of various sizes and levels of congestion, showing both that the network flow lower bound scales well computationally and that the quality of the bound is comparable to that of methods using heuristic bounds on the dual variables of DCOPF, even for congested networks where the theoretical results do not hold.

In the remainder of this paper, we introduce the worst-case relay attack model in Section 2, introduce the network flow restriction in Section 3, state the main theoretical results related to it in Section 4, present a computational study demonstrating its efficacy in Section 5, and provide concluding thoughts in Section 6.

Problem Formulation

In this section, we introduce the notation we will use throughout the paper, as well as the worst-case relay attack model itself.

Nomenclature

We will use the following notation to describe the model.

Worst-Case Relay Attack Formulation

The bilevel model is as follows: the attacker maximizes the total load shed and the defender minimizes it in (1a). Constraint (1b) ensures that the attacker does not exceed the cardinality budget for the number of relays he can attack. Constraints (1c)-(1e) enforce that if a component is unavailable, a relay which connects to it must have been attacked. The following three sets of constraints, (1f)-(1h), enforce that if a relay connected to a line, generator, or load (respectively) is attacked, that component is unavailable to the defender. Constraints (1i)-(1l) give the domain of the attacker's variables. The defender's feasible region is defined by constraints (1m)-(1t). Constraints (1m)-(1n) represent Ohm's law when v_k = 1 and are trivial when v_k = 0. Constraint (1o) enforces power balance at each node. Constraints (1p)-(1t) enforce variable bounds and turn off components which are unavailable as a result of the attack. Note that we assume that the generator dispatch lower bound is 0. We do this to ensure that the defender problem is feasible for all attacks, since it is always feasible to generate no power and shed all the load.

Network Flow Restriction

We propose solving a restriction of problem (1), denoted (2), in which we drop constraints (1m) and (1n), which consequently removes the phase angle variables θ. Note that (2) gives a lower bound to problem (1), since it expands the feasible region of the defender, giving him more options to respond to the attack and therefore decreasing the load shed from the attack. We formulate a single-level bilinear reformulation (3) of (2) by taking the dual of the defender problem. The objective function of (3) is bilinear, but since all the bilinear terms are products of a binary and a non-negative continuous variable, it is easily linearized if we have bounds on the continuous variables. We can show that for this problem, 1 is a valid upper bound for all the dual variables. Observation 1. In the formulation (3a)-(3i), it is valid to add constraints bounding each dual variable above by 1 (and the free dual variables below by −1). The proof of Observation 1 relies both on the fact that, after the constraints corresponding to Ohm's law are removed, the inner problem's constraint matrix is totally unimodular, and on the fact that all the coefficients of the objective are 1. Note that without the latter property, our results may not hold. We give a formal proof of Observation 1 in Appendix B, section B.1. The application of the observation yields a mixed integer linear reformulation (4) of (3). Note that the optimal value of (4) is a lower bound to that of (1). For a fixed attack, the defender's problem in this restriction is an ordinary capacitated network flow computation; the sketch below illustrates this response.
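Because the defender's response in the restriction can be modeled as a maximum s-t flow with a super source and super sink, as noted in the Introduction, the served load under a fixed attack can be computed directly; the sketch below is our own illustration (networkx assumed; the data layout and function name are ours):

```python
# Sketch: the defender's response in the network flow restriction, computed
# as a max s-t flow with a super source/sink (networkx assumed; the data
# layout and function name are ours). Load shed = total load - served load.
import networkx as nx

def network_flow_load_shed(lines, gens, loads, attacked):
    """lines: {id: (i, j, thermal_limit)}; gens: {id: (bus, pmax)};
    loads: {id: (bus, demand)}; attacked: set of attacked component ids."""
    G = nx.DiGraph()
    for k, (i, j, limit) in lines.items():
        if k in attacked:
            continue
        # Without Ohm's law a line is just a capacitated edge (both ways).
        G.add_edge(i, j, capacity=limit)
        G.add_edge(j, i, capacity=limit)
    for g, (b, pmax) in gens.items():
        if g not in attacked:
            G.add_edge("S", b, capacity=pmax)
    total_load = 0.0
    for d, (b, demand) in loads.items():
        total_load += demand
        if d not in attacked:            # an attacked load is fully shed
            G.add_edge(b, "T", capacity=demand)
    served, _ = nx.maximum_flow(G, "S", "T")
    return total_load - served

# Tiny example: attacking the only line isolates the load, shedding all of it.
print(network_flow_load_shed(lines={"L1": (1, 2, 1.0)},
                             gens={"G1": (1, 2.0)},
                             loads={"D1": (2, 1.0)},
                             attacked={"L1"}))  # -> 1.0
```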
Suppose z* is the optimal objective function value of (4), and let (δ*, u*, v*, w*) be an optimal solution of (4) restricted to the attacker's variables. Since we have removed the constraints corresponding to Ohm's law, it is possible that, if we fix the attacker variables (δ, u, v, w) to (δ*, u*, v*, w*), the defender has no DCOPF-feasible response achieving load shed as low as z*. In that case, fixing (δ, u, v, w) to (δ*, u*, v*, w*) leads to a load shed higher than z*. Thus, in order to (i) obtain a feasible solution to our original problem (1) and (ii) possibly improve the bound obtained from (4), we apply the steps described in Algorithm 1. In the remainder of the paper, we call the resulting lower bound, obtained from the load shed corresponding to the solution returned by Algorithm 1, the network flow lower bound (NFLB). In Section 5, we show empirically that (4) is efficiently solvable, even for large networks, and that, as far as can be measured, NFLB is a high-quality lower bound. In the following section, we give some theoretical results which provide intuition for the good quality of NFLB.

Theoretical Analysis of the Quality of the Network Flow Restriction

Though there are various notions of congestion in power networks, in this paper we describe a network as congested when the thermal limits on the transmission lines prevent a solution with less load shed. Note that, in a DCOPF model, a bound on the phase angle difference can also be a source of congestion, but for a line k this is only the case when

B_k Θ < F_k, (5)

where Θ is the phase angle difference bound. That is, phase angles become the limiting factor in how much power can be moved through the network if the maximum phase angle difference multiplied by the susceptance provides a tighter bound on the line flow than the thermal limit does. However, in the case that (5) is true, we can replace the thermal limit with the left-hand side of (5) and drop the phase angle difference bounds from the problem. Thus, without loss of generality, in this paper we model DCOPF without phase angle difference bounds and consider congestion to be caused by restrictive thermal limits. In the following, we show that, when the thermal limits are sufficiently large, it is always possible to find a DCOPF solution with the same injections as a network flow solution on the same network. Throughout this section, we will assume that there is exactly one generator per bus. Buses which do not have a generator can be represented as having a generator with maximum capacity 0, and buses with multiple generators can be represented as having one generator with capacity set to the sum of the capacities of the originals. This is again because we assume that the minimum dispatch for a generator is always 0. For notational convenience, we first introduce some definitions.

Definition 1. Represent the network as a digraph G(B, K) and let N ∈ {0, 1, −1}^{|B|×|K|} be the node-arc incidence matrix (where buses are nodes and lines are arcs). Let {d_b}_{b∈B} be a set of injections such that Σ_{b∈B} d_b = 0. Then we say: • The injection vector d is flow-polytope feasible if there exists a vector of flows that satisfies the thermal limits and flow conservation given the nodal injection values, i.e., d is flow-polytope feasible if there exists f ∈ R^{|K|} such that N f = d and −F_k ≤ f_k ≤ F_k for every arc k ∈ K. • The injection vector d is DCOPF feasible if there exists a flow vector that satisfies the thermal limits, flow conservation given the nodal injection values, and Ohm's law. That is, d is DCOPF feasible if, in addition, there exist phase angles θ ∈ R^{|B|} such that f_k = B_k(θ_i − θ_j) for every arc k = (i, j) ∈ K.
That is, d is DCOPF feasible if there exists f ∈ R |K| such that Note that we do not require that θ satisfy the bounds given in (1t). The set of DCOPF feasible injections is contained in the set of flow-polytope feasible injections. We will show that the reverse is also true for uncongested networks, that is, when the thermal limits are sufficiently large. Definition 2. Given a connected digraph G(B, K), consider a partition of the nodes formed by removing all the cut-arcs in the underlying graph and labeling the sets of nodes in each of the resulting connected components as V 1 , V 2 , . . . , V m . Let Note that, in power networks, a partition of the nodes into more than one non-empty set is unusual since a cut-arc represents a single point of failure. Thus we expect that for many of these networks, r(G) = |B|. Intuitively, the non-cut-arcs are the only ones for which the thermal limits could restrict our ability to find DCOPF feasible flows for a set of injections. This is because, on a tree (and in the absence of phase angle bounds), there always exist phase angles such that a flow-polytope feasible flow is also DCOPF feasible. Thus, the injections are certainly feasible. Stated more simply, Ohm's law poses no additional restriction on flows if there are no cycles in the network. Thus, our notion of "large enough" thermal limits only applies to arcs which appear in cycles. Formally, we have the following theorem, which we will prove in Appendix B, section B.2: and when we contract the nodes in V i into one node, the resulting graph is a tree. Let r(G) := max 1≤i≤m |V i |. (Note that we can always select r(G) = |B|.) Recall F k is the thermal limit on arc k. Let B max and B min be the maximum and minimum susceptance respectively. If d ∈ R |B| is flow-polytope feasible and then d is also a DCOPF feasible injection. Essentially, this means that, in uncongested networks, network flow is a good approximation for DCOPF when we consider the space of feasible injections. More precisely: Corallary 1. Consider the following problem: and its relaxation Let G(B, K) be the network digraph, r(G) be as defined in Theorem 1, and D ∈ R |B| be the vector We provide a proof of this result in Appendix B, section B.3. It follows that, when the thermal limits respect the bound given in Corollary 1 and we disregard phase angle bounds, the network flow relaxation of the worst-case relay attack problem is tight. Tightness of the Bound from Theorem 1 As we will show through our empirical study in Section 5, we believe that in practice, the bound on the thermal limits in Theorem 1 is quite conservative. That is, even for thermal limit values much smaller than the bound given in Theorem 1, NFLB is the same optimal value as the original problem. However, we can show that for artificial instances the result is tight within a constant: Proposition 1. There exists a constant c and a family of digraphs {G n (V n , E n )} n∈N where all susceptances are equal to 1 with corresponding injections d n ∈ R |V n | and lim n→∞ d n for all non-cut-arcs e then d n is a flow-polytope feasible injection but not a DCOPF feasible injection. A formal proof is provided in Appendix B, section B.4, but an example of such a digraph is shown in Figure 1. In essence, we can construct a digraph where the thermal limits satisfy the requirement of the theorem, but the triangles prevent there existing phase angles such that the lines can be used at capacity. 
In practice, such an instance would be surprising, since most arcs in real networks are non-cut-arcs in order to prevent a single point of failure.

Computational Results

The current state of the art. As discussed in the Introduction, the current state of the art for obtaining a lower bound for (1) is to solve single-level reformulations of (1). We present these single-level reformulations in Appendix A. In the first formulation, problem (9), we use logical constraints to give an exact reformulation of (1), because we do not specify upper bounds on the dual variables. It is possible to express this model using Gurobi's IndicatorConstraints. While this problem is exact, we will show later in this section that the computational time to solve it is prohibitively large. We therefore also consider a mixed integer linear programming reformulation of (1), given in (10), in which upper bounds on the dual variables (M) are used to linearize the implications in (9). Again, as discussed before, we typically do not have good knowledge of these bounds M, and thus solving (10) with a heuristic value of M yields only a lower bound.

Our Goal. Since the network flow restriction also provides a lower bound for the worst-case relay attack problem (i.e., NFLB), we seek to answer two questions in this section: 1. Computational tractability: what is the computational advantage of solving the network flow restriction over approaches that solve the single-level MILP obtained using arbitrary/heuristic bounds on the dual variables? 2. Quality: what is the quality of the network flow restriction solutions, i.e., of NFLB?

Question 1. In order to address Question 1, we compare the time it takes to obtain NFLB with the time it takes to find a solution with single-level reformulations of (1). In particular, after running Algorithm 1, we try to get a sense of the time it takes to find an equivalent-quality solution using the current state of the art. We therefore proceed as follows: • We run Algorithm 1, which involves solving (4) followed by solving an instance of DCOPF. We record the time needed to run Algorithm 1. • We have two choices of single-level reformulations of (1): – Solve (9): we cut off the run when the lower bound is equal to NFLB, or terminate after 4 hours if we did not achieve NFLB before then. Unfortunately, while this method is the most attractive theoretically, since problem (9) is an exact reformulation, we found, even on the smallest test case, that it is much slower to solve than (10), taking over 4 hours to prove optimality on a 118-bus case with a 5% attack budget. For this reason, we only report results with this model where we cut off at NFLB. – Solve (10): we first need to decide on M values. We set the value of M in (10) to be the ceiling of the largest dual variable of the linear program solved in line 2 of Algorithm 1. For all of our test networks, applying this procedure yielded M = 2 or M = 3. It is possible we have discarded better solutions with this choice of M, but we at least ensure that we do not cut off the solution we have already found. In addition, in experiments not reported here, we attempted larger values of M for the smaller cases and found that the solution time scales badly with increases in M; moreover, we still did not find better solutions than the one corresponding to our chosen value of M. Again, we cut off the run when the lower bound is equal to NFLB, or terminate after 4 hours if we did not achieve NFLB before then.

We use Gurobi to solve (10), setting Gurobi's MIPFocus parameter to 1 to prioritize finding good quality solutions; this setup is sketched below.
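For concreteness, the solver setup just described (MIPFocus for solution quality, the 4-hour limit, and early termination once the incumbent matches NFLB) might look roughly as follows in gurobipy; this is a sketch under the assumption that `model` already holds reformulation (10) as a maximization and `nflb` is the precomputed bound (the function name is ours):

```python
# Sketch of the Gurobi configuration described in the text (gurobipy assumed;
# building 'model', i.e., reformulation (10), is omitted).
import gurobipy as gp

def solve_single_level(model: gp.Model, nflb: float):
    model.Params.MIPFocus = 1           # prioritize finding good solutions
    model.Params.TimeLimit = 4 * 3600   # the 4-hour limit
    # The attacker maximizes load shed, so stop as soon as an incumbent
    # matches the network flow lower bound.
    model.Params.BestObjStop = nflb
    model.optimize()
    return model.ObjVal if model.SolCount > 0 else None
```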
Question 2. In order to answer Question 2, we would ideally compare NFLB to the optimal solution of (1). This is difficult, since we do not know of a non-trivial upper bound for the worst-case relay attack problem. In theory, we could accomplish this by solving (9). However, as mentioned earlier, we found that solving (9) using Gurobi's IndicatorConstraints does not scale well enough to beat NFLB. We therefore approach this question by comparing to the best lower bound obtained from the single-level reformulation (10) with the heuristic choice of M described above, warm-started with the solution found by Algorithm 1, and given a time limit of 4 hours.

Software and Hardware Specification. Our models are implemented in Pyomo (Hart et al. (2011, 2017)) using Gurobi 9.0.2 as the solver (Gurobi Optimization, LLC (2018)). The experiments are run giving Gurobi 8 threads on a server with 40 Intel Xeon 2.20GHz CPUs and 251GB of RAM.

Test Networks

We present results on 16 different networks, ranging from 118 buses to 6468 buses and with varying levels of congestion. Details of the networks are given in Table 2. Note that 118Blumsack is the IEEE 118-bus network as modified in Blumsack et al. (2007). This is a very congested network, which we use intentionally, since congestion can break down the assumptions on dual bounds used in prior work and also renders our theoretical guarantees moot. The 300Kocuk case is the IEEE 300-bus case as modified in Kocuk et al. (2016); it has also been modified to be more congested than the original. The other cases are used as they are presented in Babaeinejadsarookolaee et al. (2019). Note that the cases with names ending in 'api' and 'sad' are congested modifications of the instance which shares the prefix of their name. Since these test networks do not include any information about the control systems, for the sake of demonstration we assume that there is one relay per bus, which controls that bus, all the generators at that bus, and all the lines adjacent to the bus. For each of the test networks, we solve the problem for a range of attack budgets.

Table 2. Details of the test instances used in the computational study. The last two columns give insight into congestion: we solve DCOPF with no attack and give the percentage of lines operating at their limits in the 'Percentage of Thermal Limits Tight' column and the percentage of buses with a phase angle at its bound in the 'Percentage of Phase Angle Bounds Tight' column.

Network Flow Restriction Results

Difficulty of solving an instance as a function of budget. Both very small budgets and very large budgets turn out to be easier problems for Gurobi. As is also observed in Bienstock and Verma (2010), we typically see that the computational time is longest for mid-range budgets. This is intuitive, since for small budgets there are fewer possible attacks, and for larger budgets the attacker is able to shed all the load in the system, so the trade-offs are no longer interesting. This concept is illustrated for a couple of the test networks in Figure 2. In Figures 2a and 2c, we see that as the attack budget increases, the amount of load shed achievable by the attacker increases and eventually saturates at the total load in the system. Note that these plots show the lower bound achieved at the 4-hour time limit, explaining why (9) can have a lower objective value than the other models.
In Figure 2b, we see that the most difficult problems computationally are at the elbow of the curve in Figure 2a. Similarly, in Figure 2d, for a larger case, all the small budgets are difficult for Gurobi, but the problem becomes trivial after passing the saturation point.

Figure 2 (caption excerpt): results for problems (9) and (10), terminating either when the bound reaches NFLB or after 4 hours; the numbers visualized in plots 2b and 2d are given in the 'NFLB Time,' 'Problem (10) Time to NFLB,' and 'Problem (9) Time to NFLB' columns of Tables 3 and 4.

Results. The results from the network flow restriction and the experiments on problems (9) and (10) are shown in Tables 3, 4, 5, and 6 for the twelve smaller test instances. The first column shows the attack budget as a percentage of the relays in the system. The second column translates this into an integer number of relays which can be attacked, that is, the value of U for that instance. The third column gives the best known lower bound from among all our experiments; this is the highest known load shed the attacker can achieve, given in per unit. In the 'NFLB' columns, 'Quality' is the load shed from the attack found by Algorithm 1 as a percentage of the best known lower bound, and the time in seconds that it takes to run Algorithm 1 is reported in the 'Time' column. In the next three columns, we report results related to Question 2 above, that is, determining the quality of NFLB. Recall that in this experiment we solve (10) using the heuristic M, warm-starting with the solution corresponding to NFLB and allowing Gurobi a 4-hour time limit. The 'Problem (10) Quality' column gives the load shed this experiment achieved as a percentage of the best known lower bound. The 'Problem (10) Time' column gives the time for the Gurobi solve. The 'Problem (10) Gap' column reports the gap after the 4 hours. Note that this is not a gap with respect to a valid upper bound for the worst-case relay attack problem, but is instead a measure of how close Gurobi was to proving optimality on the particular restriction it was solving, in this case with the dual variables bounded by 2 for all but the 1888rte api and 1951rte api cases, where the dual variables are bounded by 3. That is, Gurobi's upper bound is a bound on the best feasible solution achievable with this restriction. In the last four columns, we report results related to Question 1 from the beginning of this section, in which we compare to solving the worst-case relay attack problem using formulations from the prior literature. In these experiments, we do not warm-start the Gurobi solves, and we cut off the solve when Gurobi achieves NFLB, if that happens before the time limit of 4 hours. We report the load shed achieved as a percentage of the best known lower bound, as well as the time it takes Gurobi to find a solution whose objective value is as good as NFLB when solving (10) and (9), respectively. As mentioned previously, we do not report results where we continue solving (9) after it achieves NFLB, because we found it slow to find a solution as good as that obtained by Algorithm 1, even for the smaller test cases. Gurobi hits the 4-hour time limit consistently for the more difficult budgets in the larger of these instances (i.e., Gurobi does not reach NFLB within 4 hours). Therefore we did not compare with solving either (9) or (10) for the four largest instances, and instead report the results of just the network flow restriction in Table 7.
Column "NFLB" gives the load shed from the attack found by Algorithm 1 in per unit, and "NFLB Time" gives the time taken to run Algorithm 1. Quality of NFLB Without a nontrivial upper bound on the worst-case relay attack problem, we cannot comment precisely on the quality of the network flow restriction. However, in comparisons with the lower bound attained from solving with a heuristic bound on the dual variables, we see that in 113 out of 120 instances, NFLB was the best bound. In the 7 instances where NFLB was not the best lower bound, it was 89.25%, 91.73%, 99.77%, 99.06%, 99.88%, 99.31%, and 97.60% of the best load shed found. The budgets for which there is a gap between the best-known solution and the network flow restriction solution tend to be small. This is consistent with the bound from Theorem 1 since for these budgets there is relatively little load shed, meaning that the 1 -norm of the injections is likely quite large relative to its maximum possible value for the instance (when all the load is served), making the right-hand side of (6) large. In this case, the theory suggests that network flow is not as good of an approximation of DCOPF. However, at least for smaller network sizes, Gurobi is able to solve (10) for smaller attack budgets with a heuristic bound on the dual variables, and might be a better option. For larger network sizes, even though we sometimes see a slight gap between the NFLB and (9) or (10), the network flow solution still appears to be of extremely good quality. Additionally, for these networks, (9) and (10) do not scale well enough to be computationally tractable: Among the 90 larger instances tested, (10) fails to achieve the NFLB within 4 hours in 25 instances, and (9) fails to do so in 39 instances. Last, note that even in the congested variations Table 3. Network flow restriction results on the six smallest cases, part 1. For each instance, we show results for 10 different budgets for the percentage of relays that can be attacked. The best known achievable load shed is in the "Best Known LB" column. In the following two columns we give NFLB as a percentage of the best known solution for the instance and the computational time for Algorithm 1. The next three columns show the load shed attained by the solution we get from running (10) for up to 4 hours, the running time, and Gurobi's optimality gap at termination. The last four columns show the quality of the solution achieved and the times for Gurobi to achieve NFLB when solving problems (10) and (9) can be attacked. The best known achievable load shed is in the "Best Known LB" column. In the following two columns we give NFLB as a percentage of the best known solution for the instance and the computational time for Algorithm 1. The next three columns show the load shed attained by the solution we get from running (10) for up to 4 hours, the running time, and Gurobi's optimality gap at termination. The last four columns show the quality of the solution achieved and the times for Gurobi to achieve NFLB when solving problems (10) and (9) respectively. Note that, in the Question 1 results, in cases where problem (10) runs for 4 hours but has a quality of 100.00%, this is a symptom of rounding: The NFLB is not quite achieved within the time limit, but that is not reflected within the two decimal places in this table. Also note that the Question 1 experiment solving problem (10) occasionally finds the best known solution since it can exceed the NFLB in the iteration before it terminates. Table 5. 
Network flow restriction results on the 'api' congested cases. For each instance, we show results for 10 different budgets for the percentage of relays that can be attacked. The best known achievable load shed is in the "Best Known LB" column. In the following two columns we give NFLB as a percentage of the best known solution for the instance and the computational time for Algorithm 1. The next three columns show the load shed attained by the solution we get from running (10) for up to 4 hours, the running time, and Gurobi's optimality gap at termination. The last four columns show the quality of the solution achieved and the times for Gurobi to achieve NFLB when solving problems (10) and (9) Table 6. Network flow restriction results on the 'sad' congested cases. For each instance, we show results for 10 different budgets for the percentage of relays that can be attacked. The best known achievable load shed is in the "Best Known LB" column. In the following two columns we give NFLB as a percentage of the best known solution for the instance and the computational time for Algorithm 1. The next three columns show the load shed attained by the solution we get from running (10) for up to 4 hours, the running time, and Gurobi's optimality gap at termination. The last four columns show the quality of the solution achieved and the times for Gurobi to achieve NFLB when solving problems (10) and (9) Tables 3 and 4, and the second plot shows the data from the "NFLB Time" column from Tables 3, 4, and 7. Instance of the test networks shown in Tables 5 and 6, the quality of NFLB is good despite the theoretical results not holding. Computational Tractability of Algorithm 1 In Tables 3, 4, 5, and 6, Algorithm 1 takes less than 4 minutes on all of the instances of the problem tested. In Table 7, Algorithm 1 takes less than 22 minutes in all cases, and often takes less than 5 minutes. In contrast, when solving (10), Gurobi times out without proving optimality within 4 hours for the hardest budgets on all but the smallest test case. For the nine larger cases, Gurobi takes more than 4 hours to find a solution of the same quality as NFLB using problem (9) and with the heuristic value of M in problem (10). In essence, we see that scaling up the size of the network for difficult attack budgets is not feasible solving a linearization of the single-level reformulation of (1), even with small heuristic bounds on the duals. However, we can easily find what we believe to be a good-quality solution for even a 6,468 bus network using Algorithm 1. These observations are visualized in Figure 3: In Figure 3a we plot on a log scale the computational times to solve (10) linearized using the heuristic value of M . There is noise in the 7% and 10% budgets because the most difficult budgets in that range depend on the particular network, not just the number of nodes. However, in general we see that, even for the easier very large budgets, the solve times appear to scale exponentially. For the smaller, more difficult budgets, we hit the 4-hour time limit for most of the networks. In Figure 3b, we plot computational times for all ten of our test networks on a linear scale. The scaling for this method appears to be roughly linear in the size of the network, where the lower budgets are more difficult and the higher budgets tend to be easier. The spike for the 1% budget on the 3,012 bus instance is consistent for different seeds: It appears to be an anomaly in terms of difficulty for Gurobi. 
Overall, we find NFLB to be approximately 150 times faster than using Gurobi. We arrive at this number by taking the average of the ratio of the time for Gurobi to reach NFLB using Problem (10) and the time to compute NFLB over the 120 instances tested. In summary, we see through our computational experiments that the most difficult instances of the worst-case relay attack problem are for mid-range budgets on large networks. Solving the traditional linearized single-level formulation including DCOPF in the inner problem does not scale well, even when the bound on the duals is as small as 2 or 3. In contrast, we are able to solve challenging budgets on networks up to 6,468 nodes in less than 25 minutes using the network flow restriction. To the extent it is ascertainable, the quality of solutions is good. Conclusion In this work, we analyzed a restriction of the worst-case relay attack problem which has theoretical guarantees on uncongested networks and which we have also shown empirically to provide a highquality lower bound, even on congested networks. We have shown that, in addition to the apparent tightness of the lower bound, the network flow restriction can be solved efficiently and to scale with a commercial MIP solver. We suspect this is due in part to the fact that the network flow restriction can be linearized with big-M values of 1, and also in part to the familiar, well-studied structure of network flow itself. In future work, there is a need to consider upper bounds for this problem and to improve the scalability of exact methods. Additionally, higher-complexity restoration models have been shown to be important for N − k models when k is large (Coffrin et al. (2019)), so there is a need to find scalable solution methods when the defender problem includes elements such as nonlinear approximations of the AC power flow equations, bus shunts, and line charging. Last, the network flow approximation for DCOPF could be used in place of DCOPF in numerous other problems for both power systems operations and security. Since power systems are rarely congested in practice, it is likely that this approximation can be of use in order to scale up other problems which currently rely on DCOPF. A Single-Level Formulations of the Worst-Case Relay Attack Problem In this appendix, we give the two single-level formulations of (1) that we compare against problem (4). Let ξ + and ξ − represent the duals of constraints (1m) and (1n) respectively, and κ + and κ − be the duals of the phase angle bound constraints in (1t). Then we have the logical formulation: We do not specify upper bounds on the dual variables of DCOPF and we encode constraints (9f)-(9q) using Gurobi IndicatorConstraints. Next, we give a mixed integer linear programming reformulation of (9) with a heuristic upper bound on the dual variables. Let M represent the heuristic bound chosen for the dual variables of the DCOPF problem. We use this bound to give a mixed integer linear representation of the implications in (9f)-(9q): Let I k be the k × k identity matrix. Without loss of generality we may assume that there is exactly one generator per bus. We can do this because we already assumed the generator dispatch lower bound is 0, so if there are multiple generators at a bus, we can aggregate them into one by summing their maximum capacities. Note that this means |G| = |B| in the following. 
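To make the two encodings concrete, here is a minimal gurobipy sketch (assuming Gurobi and its Python API are installed) of one generic implication of the form z = 1 ⟹ λ ≤ 0, written first as an indicator constraint, as used for (9), and then as a big-M linearization with a heuristic dual bound, as used for (10). The variable names, the bound M = 2, and the toy objective are illustrative only and are not the paper's actual constraints (9f)-(9q); both encodings are placed in one toy model purely so they can be shown side by side.

```python
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("implication_encodings")
z = m.addVar(vtype=GRB.BINARY, name="z")       # an attack/relay indicator (illustrative)
lam = m.addVar(lb=-GRB.INFINITY, name="lam")   # a dual variable of the inner DCOPF (illustrative)

# Logical formulation, as in (9): an IndicatorConstraint, enforced only when z = 1.
# No explicit bound on lam is required.
m.addGenConstrIndicator(z, True, lam <= 0.0, name="indicator")

# MILP formulation, as in (10): a big-M linearization with a heuristic bound M on the dual.
# This is valid only if |lam| never needs to exceed M at an optimal solution.
M = 2.0
m.addConstr(lam <= M * (1 - z), name="bigM_implication")
m.addConstr(lam >= -M, name="heuristic_dual_lb")

m.setObjective(lam, GRB.MAXIMIZE)
m.optimize()
print(z.X, lam.X)
```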
To show the claim, we will show that the elements of any extreme point of the dual polyhedron, defined by (3b)-(3i), are bounded in absolute value by 1. For notational convenience, let $\{x : Gx \le h\}$ be the system (3b)-(3i). Note that $G$ is an integer matrix. Let $n = 3|B| + 2|K| + |G|$, the dimension of the dual space. Then an extreme point of the polyhedron is a feasible solution at which a subsystem of $n$ inequalities from $Gx \le h$ holds at equality. Let $\bar{G}$ denote the square submatrix of $G$ corresponding to this subsystem, and let $\bar{h}$ be the corresponding subvector of $h$. By Cramer's Rule, we can calculate the $i$th component of that solution:
$$\hat{x}_i = \frac{\det(\bar{G}_1, \ldots, \bar{G}_{i-1}, \bar{h}, \bar{G}_{i+1}, \ldots, \bar{G}_n)}{\det(\bar{G})},$$
where $\bar{G}_j$ is the $j$th column of $\bar{G}$. Since $\bar{G}$ is integer, we know that $|\det(\bar{G})| \ge 1$. This means that
$$|\hat{x}_i| \le \big|\det(\bar{G}_1, \ldots, \bar{G}_{i-1}, \bar{h}, \bar{G}_{i+1}, \ldots, \bar{G}_n)\big|. \qquad (11)$$
In the following, we show that the right-hand side of (11) is at most 1 by showing the matrix in question is totally unimodular. We will show that $[G\ h]$ is totally unimodular, since that means any submatrix of $[G\ h]$ is totally unimodular. Writing the columns corresponding to the ordering of the variables $(\lambda^+, \lambda^-, \mu, \gamma, \alpha, \beta)$, we can write $[G\ h]$ in block form, where, without loss of generality, we relabel the generators so that we get the identity in the part of the matrix corresponding to $\mu$ in constraints (3c). We use $N$ to represent the node-arc adjacency matrix of the network, which is known to be totally unimodular. We use $\mathbf{1}$ to represent the vector of all 1's in $\mathbb{R}^{|B|}$. From (12), we see it suffices to show that the matrix $A$ is totally unimodular, since $[G\ h]$ augments $A$ by a series of identities and the negative of the first row. It is easy to verify that $A$ is totally unimodular; for example, it satisfies the conditions of the theorem by Hoffman (Heller and Tompkins (1956)). By the definition of total unimodularity, it follows from the claim above and equation (11) that $-1 \le \hat{x}_i \le 1$ for all $i$. This means that 1 is a valid upper bound for $\alpha$, $\beta$, $\lambda^+$, $\lambda^-$, $\gamma$, and $\mu$, and $-1$ is a valid lower bound for $\mu$ in problem (3).

B.2 Proof of Theorem 1
We will first establish some lemmas before we give a proof of Theorem 1. Let $N$ be the $|V| \times |E|$ node-arc incidence matrix of a connected digraph $G(V, E)$. We remind the reader of two facts:
• The rank of $N$ is $|V| - 1$.
• Let $N^{(i)}$ be the matrix where the $i$th row of $N$ is removed and let $N_i$ be the $i$th row of $N$. Then $N_i = -\sum_{j \ne i} N_j$ (13); that is, we can calculate any given row of the matrix by taking the negative of the sum of the other rows.
In the following lemma, we derive the injection shift factor formulation for DCOPF.
Proof. Let $\theta^{(0)}$ be the vector of phase angles where we have removed the component corresponding to the 0th node. Then $f$ and $\theta^{(0)}$ must satisfy the nodal balance constraints $N^{(0)} f = d^{(0)}$ (14) and Ohm's law $f = B (N^{(0)})^{\top} \theta^{(0)}$ (15). Combining (14) and (15), $N^{(0)} B (N^{(0)})^{\top} \theta^{(0)} = d^{(0)}$. Since $N^{(0)}$ is full row rank and $B$ is a diagonal matrix with all entries positive, $N^{(0)} B (N^{(0)})^{\top}$ is invertible. This means
$$f = B (N^{(0)})^{\top} \big(N^{(0)} B (N^{(0)})^{\top}\big)^{-1} d^{(0)}.$$
Set the phase angle of the 0th node to 0. Then by (13), the resulting phase angles and the vector of flows above satisfy the nodal balance constraints and Ohm's law constraints.
Given a square matrix $H$, let $\lambda_{\max}(H)$ be the largest eigenvalue of $H$.
Lemma 2. Let $A \in \mathbb{R}^{m \times n}$ be a matrix with full row rank. Then $\lambda_{\max}\big(A^{\top}(AA^{\top})^{-1}A\big) = 1$.
Proof. The matrix $A^{\top}(AA^{\top})^{-1}A$ is an orthogonal projection matrix, so all of its eigenvalues are 1 or 0 (since it is idempotent). Since $A$ has rank $m$, so does $A^{\top}(AA^{\top})^{-1}A$, so $m$ of them are 1, and we have the result.
Proof of Theorem 1. Since $d$ is flow-polytope feasible, let $f^{nf} \in \mathbb{R}^{|K|}$ be the flow vector that satisfies thermal limits and nodal balance constraints given the node injection values $d$.
We must show that there exists a flow vector that not only satisfies nodal balance constraints given the injections $d$ and thermal limits, but also Ohm's law.
Claim 1: It is sufficient to prove the DCOPF polytope is non-empty on each of the subgraphs corresponding to $V^i$, where we may assume that the $\ell_1$-norm of the injections on the vertices $V^i$ is at most $\|d\|_1$.
Claim 1 is straightforward to verify, so we only sketch the arguments here. For the arcs connecting vertex blocks $V^i$ and $V^j$ where $i \ne j$, we will keep the flow values from $f^{nf}$. That flow clearly satisfies the thermal limit, and since those arcs are not involved in any cycles, once we find flow values on the incident arcs within each $V^i$, we will be able to find values of $\theta$ such that Ohm's law will also be satisfied. Thus, the problem reduces to finding flows within blocks of nodes $V^i$ for $i \in \{1, 2, \ldots, m\}$. It is straightforward then to show that the $\ell_1$-norm of the injections on the vertices $V^i$ is at most $\|d\|_1$.
Consider a block $V$ (we drop the superscript for simplicity), recalling that it has at most $r(G)$ nodes. Let the net injections on the nodes be $d_V$ such that $\|d_V\|_1 \le \|d\|_1$. For simplicity of notation, we will refer to the subgraph on $V$ as $H(V, E)$. Let $N \in \{0, 1, -1\}^{|V| \times |E|}$ be the node-arc incidence matrix of $H$. Let $B \in \mathbb{R}^{|E| \times |E|}$ be a diagonal matrix with $B_{ee}$ equal to the susceptance on arc $e$. Let $v_0 \in V$ be an arbitrarily chosen reference bus, let $N^{(v_0)}$ be as defined before, and let $d^{(v_0)}$ be the vector where we have removed the component corresponding to $v_0$ from $d_V$. Then by Lemma 1, the unique flow that satisfies the DCOPF constraints on block $V$ is
$$f = B (N^{(v_0)})^{\top} \big(N^{(v_0)} B (N^{(v_0)})^{\top}\big)^{-1} d^{(v_0)}. \qquad (21)$$
Let $\sqrt{B}$ be a diagonal matrix whose $(e, e)$th entry is $\sqrt{B_{ee}}$.
Claim 2: There exists a vector $\tilde{d} \in \mathbb{R}^{|E|}$ such that $N^{(v_0)} \sqrt{B}\, \tilde{d} = d^{(v_0)}$ and $\|\tilde{d}\|_2 \le \frac{1}{\sqrt{B_{\min}}} \cdot \frac{\sqrt{r(G)-1}}{2} \|d_V\|_1$.
We will show that there exists $x \in \mathbb{R}^{|E|}$ such that $N^{(v_0)} x = d^{(v_0)}$ and $\|x\|_2 \le \frac{\sqrt{r(G)-1}}{2} \|d_V\|_1$. This completes the proof since we can then find $\tilde{d}$ by solving $\sqrt{B}\, \tilde{d} = x$. In the solution, we will have $\|\tilde{d}\|_2 \le \frac{1}{\sqrt{B_{\min}}} \|x\|_2$, since $\sqrt{B}$ is a diagonal matrix and $\sqrt{B_{\min}}$ is the smallest diagonal entry. This means that $\|\tilde{d}\|_2 \le \frac{1}{\sqrt{B_{\min}}} \cdot \frac{\sqrt{r(G)-1}}{2} \|d_V\|_1$, as required.
Let $N^{(v_0)} = [P\ Q]$ where $P$ is composed of columns corresponding to the arcs of a spanning tree in $H(V, E)$. This means that $P$ is a full row rank square matrix, and furthermore that it is totally unimodular since it is the adjacency matrix of a bipartite graph. Solve
$$P \hat{d} = d^{(v_0)} \qquad (22)$$
and let $x = (\hat{d}, 0)$, padding with zeros on the arcs outside the spanning tree. So it is sufficient to show that $\|\hat{d}\|_2 \le \frac{\sqrt{r(G)-1}}{2} \|d_V\|_1$. Note that, by (22), $\hat{d}$ is a flow on the tree corresponding to $P$ where the injections on the nodes are given by $d^{(v_0)}$. Note that since $P$ represents a tree, the removal of any arc of the graph disconnects the graph. If we remove arc $i$, let $S_i$ represent the set of nodes in the component containing $o(i)$. Using this notation, this means that for all arcs $i$, $\hat{d}_i = \sum_{j \in S_i} (d_V)_j$. By Lemma 3, this means that $|\hat{d}_i| \le \frac{1}{2} \|d_V\|_1$. Finally, the support of $\hat{d}$ is at most $r(G) - 1$, since a tree on $r(G)$ nodes has $r(G) - 1$ arcs. So $\|\hat{d}\|_2 \le \sqrt{r(G)-1}\, \max_i |\hat{d}_i| \le \frac{\sqrt{r(G)-1}}{2} \|d_V\|_1$, showing Claim 2.
Now, we can rewrite (21) as
$$f = B (N^{(v_0)})^{\top} \big(N^{(v_0)} B (N^{(v_0)})^{\top}\big)^{-1} d^{(v_0)} = B (N^{(v_0)})^{\top} \big(N^{(v_0)} B (N^{(v_0)})^{\top}\big)^{-1} N^{(v_0)} \sqrt{B}\, \tilde{d} = \sqrt{B}\, A^{\top} (A A^{\top})^{-1} A\, \tilde{d},$$
where $A = N^{(v_0)} \sqrt{B}$, the second equality holds by Claim 2, and the last holds since $\sqrt{B}$ is symmetric. Therefore
$$\|f\|_{\infty} \le \|f\|_2 \le \sqrt{B_{\max}}\, \big\|A^{\top} (A A^{\top})^{-1} A\, \tilde{d}\big\|_2 \le \sqrt{B_{\max}}\, \|\tilde{d}\|_2 \le \frac{\sqrt{B_{\max}}}{\sqrt{B_{\min}}} \cdot \frac{\sqrt{r(G)-1}}{2} \|d_V\|_1,$$
where the third inequality follows from Lemma 2 and the last from Claim 2. By the assumption of the theorem, $F_e \ge \frac{B_{\max}}{B_{\min}} \cdot \frac{\sqrt{r(G)-1}}{2} \|d\|_1$ for all $e \in E$; since $B_{\max} \ge B_{\min}$ and $\|d_V\|_1 \le \|d\|_1$, the $f$ above is feasible, completing the proof.
B.3 Proof of Corollary 1
By construction, problems (7) and (8) are both bounded and feasible (since it is always possible to shed all the load and since $l$ is bounded). Also, since (8) is a relaxation of (7), $z^{l} \le z^{*}$.
It is sufficient to show that there exists a solution to (7) with the same objective value as (8). Let $(\bar{f}, \bar{l}, \bar{p})$ be an optimal solution to (8). Since $0 \le \bar{l} \le D$ and $0 \le \bar{p} \le P$, it is sufficient to show that system (23) has a feasible solution. Since $\bar{p} \ge 0$ and since the injections satisfy global balance, that is, $\sum_{g \in G} \bar{p}_g = \sum_{b \in B} (D_b - \bar{l}_b)$ (24), we have that
$$\|D - \bar{l} - \bar{p}\|_1 \le \|D - \bar{l}\|_1 + \|\bar{p}\|_1 = 2\|D - \bar{l}\|_1 \le 2\|D\|_1,$$
where the first inequality follows from the triangle inequality, the equality comes from (24) and the fact that $\bar{l} \le D$ and $0 \le \bar{p}$, and the last inequality again follows from $\bar{l} \le D$. The feasibility of system (23) then follows from Theorem 1, completing the proof.
B.4 Proof of Proposition 1
Let $n \in \mathbb{N}$ be given. We construct $G_n(V_n, E_n)$ as follows:
• $|V_n| = 3n$. Number the nodes from 1 to $3n$.
There is a flow-polytope feasible flow on $G_n$ given by $f_{(3i+1,3i+2)} = f_{(3i+2,3i+3)} = f_{(3i+1,3i+3)} = \frac{i+1}{2}$ for $i = 0, 1, \ldots, n-1$ and $f_{(3i+3,3i+4)} = i + 1$ for $i = 0, 1, \ldots, n-2$. However, we can show that these injections are not DCOPF-feasible. To see this, consider the triangle formed by nodes $3n-2$, $3n-1$, and $3n$. We know that we have an in-flow of $n$ units to node $3n-2$, and that all of it must be routed to node $3n$. Without loss of generality, suppose the phase angle at node $3n$ is 0. There are exactly two paths from $3n-2$ to $3n$: the arc between them, and the two-arc path via node $3n-1$. Each of these paths has capacity $n/2$, so we must use both paths at capacity. However, this is impossible, as it requires setting the phase angle at node $3n-1$ to $n/2$ and setting $\theta_{3n-2}$ to $n$. But that means the flow on the arc $(3n-2, 3n)$ is $n > F_{(3n-2,3n)} = \frac{n}{2}$.
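As a small numerical illustration of the injection shift factor formula derived in Lemma 1 (a sketch under the usual DC power flow assumptions; the 4-bus cycle network, susceptances, and injections below are made up for illustration and are not one of the paper's test cases), the following computes the unique DC-feasible flow and verifies nodal balance and Ohm's law:

```python
import numpy as np

# Illustrative 4-bus, 4-arc cycle network. Column e of N has +1 at the from-bus
# and -1 at the to-bus of arc e; arcs are (0,1), (1,2), (2,3), (3,0).
N = np.array([
    [ 1,  0,  0, -1],
    [-1,  1,  0,  0],
    [ 0, -1,  1,  0],
    [ 0,  0, -1,  1],
], dtype=float)
B = np.diag([10.0, 5.0, 8.0, 2.0])      # branch susceptances (illustrative)
d = np.array([1.5, -0.5, -0.7, -0.3])   # net injections, summing to zero

N0 = N[1:, :]                            # incidence matrix with the reference bus (bus 0) removed
d0 = d[1:]

# Lemma 1: f = B N0^T (N0 B N0^T)^{-1} d0, computed here via the reduced phase angles.
theta0 = np.linalg.solve(N0 @ B @ N0.T, d0)
f = B @ N0.T @ theta0

theta = np.concatenate(([0.0], theta0))  # set the reference angle to 0
assert np.allclose(N @ f, d)             # nodal balance: net flow out of each bus equals its injection
assert np.allclose(f, B @ N.T @ theta)   # Ohm's law on every arc
print("flows:", f)
```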
2021-05-07T01:16:05.206Z
2021-05-06T00:00:00.000
{ "year": 2022, "sha1": "3a166634c1a683faae1f28f62be1bbe6bcdb1dc7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d87235a4b916e20ca80695cdc8b69c37b0975e1b", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
259427790
pes2o/s2orc
v3-fos-license
Prediction of Hematoma Expansion in Hypertensive Intracerebral Hemorrhage by a Radiomics Nomogram Objective: To develop and validate a radiomics-based nomogram model which aimed to predict hematoma expansion (HE) in hypertensive intracerebral hemorrhage (HICH). Methods: Patients with HICH (n=187) were included from October 2017 to March 2022 in the Yongchuan Affiliated Hospital of Chongqing Medical University. Patients were randomly divided into a training set (n=130) and a validation set (n=57) in a ratio of 7:3. The radiomic features were extracted from the regions of interest (including main hematoma, the surrounding small hematoma(s) and perihematomal edema) in the first CT scan images. The variance threshold, SelectKBest and LASSO (least absolute shrinkage and selection operator), features were selected and the radiomics signature was built. Multivariate logistic regression was used to establish a nomogram based on clinical risk factors and the Rad-score. A receiver operating characteristic (ROC) curve was used to evaluate the generalization of the models’ performance. The calibration curve and the Hosmer-Lemeshow test were used to assess the calibration of the predictive nomogram. And decision curve analysis (DCA) was used to evaluate the prediction model. Results: Thirteen radiomics features were selected to construct the radiomics signature, which has a robust association with HE. The radiomics model found that blend sign was a predictive factor of HE. The radiomics model ROC in the training set was 0.89 (95%CI 0.82-0.96) and was 0.82 (95%CI 0.60-0.93) in the validation set. The nomogram model was built using the combined prediction model based on radiomics and blend sign, and worked well in both the training set (ROC: 0.90[95%CI 0.83-0.96]) and the validation set (ROC: 0.88[95%CI 0.71-0.93]). Conclusion: The radiomic signature based on CT of HICH has high accuracy for predicting HE. The combined prediction model of radiomics and blend sign improves the prediction performance. INTRODUCTION Stroke is one of the main causes of death and disability in China, and hypertensive intracerebral hemorrhage (HICH) is the most common form of hemorrhagic stroke. 1,2 According to Brouwers's test, approximately 30% of patients who had an intracerebral hemorrhage (ICH) experienced continued bleeding after the initial event. 3 Patients with hematoma expansion (HE), a complication of ICH, are at increased risk of poor outcomes (43.5% vs. 5.8%) and reported a higher 90day Modified Rankin Score (mRS) (67/85, 78.8% vs. 68/172, 39.5%), suggesting increased neurological disability, than patients without HE. 3,4 Early prediction of HE, coupled with rapid implementation of acute interventions, can improve long-term outcomes in this devastating disease. [3][4][5] A head CT scan is the gold standard for the diagnosis of ICH, and HE is typically diagnosed using the morphology, density, volume and other imaging features of the main hematoma. 6,7 Recent studies have shown that imaging markers such as the island sign, blend sign and surrounding edema of HE have predictive effects beyond the main hematoma. 7,8 In addition, age, gender, blood glucose, blood calcium, the Glasgow Coma Scale (GCS), systolic blood pressure (SBP), and diastolic blood pressure (DBP) provide further prediction of HE. [7][8][9][10] To create a prediction model of HE in ICH, we aimed to develop and validate a radiomics-based nomogram, as the nomogram has been widely used as a clinical predictive method in both stroke and ICH. 
9,10 The radiomics signature was built by counting the main hematoma and surrounding factors. HE was defined as a 33%-or 6-ml increase in hematoma volume. The head spiral CT scan was performed using a Philips Brilliance 265 iCT machine. The scanning parameters were set at a tube voltage of 120kV, a tube current of 400mAs, the slice thickness was 5mm, the slice interval was 5mm, the pitch was 0.984-1.375, and the matrix was 512×512. Radiomic feature extraction is described in the supplementary data. All CT images were accessed in DICOM format with a CT image brain window width of 80-100HU and window level of 30-40HU. Imaging data was kept in the uAI research portal (Shanghai United Imaging Intelligence Co., Ltd, version: 21130). The region of interest (ROI) of the main hematoma, surrounding small hematoma and edema of the ICH was determined by two experienced radiologists ( Supplementary Fig.1A, B, C). Fifty CT images were randomly selected to determine the inter-and intra-observer agreement of ROI-based feature reproducibility by reader-1 (a radiologist with four years of experience) and reader-2 (a radiologist with 30 years of experience). Reader 1 repeated the same procedure in the one month follow up. An inter-and intra-class correlation coefficient (ICC) greater than 0.75 indicated good agreement of the feature extraction. METHODS A total of 2259 radiomic features were extracted from images which were preprocessed by Z-score normalization to eliminate difference. Methods of dimension reduction, variance threshold, SelectKBest and least absolute shrinkage and selection operator (LASSO), were used to select the best predictive feature. A radiomics signature was calculated for each patient via a linear combination of selected features that were weighted by their respective coefficients. The association of the radiomics signature with HE was first assessed in the training set and then validated in the validation set using a Mann-Whitney U-test. All processes were run using the uAI research portal (Shanghai United Imaging Intelligence Co., Ltd, version: 21130). A multivariate logistic regression model was performed to identify the independent factors among radiomics signature, clinical variables, and radiographic features to identify HE and NHE in the training set. A radiomics nomogram was constructed based on the multivariate logistic regression model. A radiomics signature was calculated for each patient using the formula constructed in the training set. Statistical analysis: RStudio 3.3.2 and SPSS 26.0 were used for statistical analysis. Measurement data with homogeneity of variance of normal distribution were expressed as mean ± SD, and an independent sample t-test was used to compare differences between groups. Continuous variables with non-normal distribution were expressed as [M (Q1, Q3)], and differences between groups were analyzed by Mann-Whitney U-test. Enumeration data were expressed as numbers and percentages, and differences between groups were compared by chi-square test or Fisher exact test. Binary Logistic Regression was used to establish the prediction model for the indicators with statistical differences. An ROC curve was used to analyze the predictive value of the models. The prediction ability of the nomogram was measured by a calibration curve. The Hosmer Lemeshow (HL) test was used as the model fitting index to judge the gap between the predicted value and the real value. 
If the p-value is greater than 0.05, it indicates that there is no significant difference between the predicted value and the real value. The HL test was performed to assess the goodness-of-fit of the nomogram, and a decision curve analysis (DCA) was carried out with the best model. A P value <0.05 was considered statistically significant.
RESULTS
The baseline clinical and imaging data of the training set and the validation set are shown in Table-I. There was no significant difference in general statistical data between the two groups (P > 0.05). A total of 2259 imaging features were extracted from each patient, and 13 robust features were selected by the variance threshold method, SelectKBest100 and LASSO regression analysis. Radiomics scores of patients were calculated according to the features and their coefficients (Fig.1A). Features with nonzero coefficients were selected and the radiomics score (rad_score) was calculated and converted to a probability (range 0-1) of HE for each patient by using the following formula:
$$P(\mathrm{HE}) = \frac{e^{\sum_{k} a_k x_k}}{1 + e^{\sum_{k} a_k x_k}},$$
where $x_k$ represents the selected radiomics features, $a_k$ the respective coefficients, and $e$ refers to the Euler number (e = 2.71828). The 13 features were converted into a radiomics signature displayed in Fig.1B. In the training set, the differences in blend sign and Rad_score between the HE and NHE groups were statistically significant (P<0.05) (Tables-II and III). The nomogram model was constructed to visualize the results of the multivariable logistic regression analysis based on the Rad_score and blend sign. For example, as shown in Fig.2, a patient's radiomics score was 0.03513 and there was no blend sign; the corresponding total score is 88.2, and the predicted probability of HE is 59.3%. The calibration curve of the radiomics nomogram for the probability of HE in ICH demonstrated good agreement between prediction and observation in the two sets (Fig.3A and B). The Hosmer-Lemeshow test (Fig.3C and D) yielded a non-significant statistic in both the training and validation sets, which suggested that there was no departure from a perfect fit. The C-index for the prediction nomogram was 0.90 (95% CI, 0.83-0.96) in the training set and 0.88 (95% CI, 0.71-0.93) in the validation set. Clinical Use: The decision curve analysis (DCA) showed that, at a threshold probability of 10% for a patient or doctor, using the radiomics nomogram to predict HE adds more benefit than either the treat-all-patients scheme or the treat-none scheme (Fig.4). Within this range, net benefit was comparable, with several overlaps, based on the radiomics nomogram and the model with histologic grade integrated.
DISCUSSION
In this study, the occurrence of HE in patients with HICH was predicted by our combined radiomic nomogram model. This nomogram was built with 13 radiomic features screened by the LASSO logistic regression model and their corresponding weighting coefficients. In addition to one shape feature, 12 of the 13 selected features in the radiomics model are microscopic information that cannot be obtained by the naked eye. The shape feature describes the geometric characteristics of the ROI. The shape feature in this study was the volume of voxels. The selection of voxel volume further verifies that HE is closely related to the volume of hematoma and edema in patients with HICH. 10,11 In addition, the 10th Percentile, 90th Percentile, GLDM, GLCM, GLSZM and other first-order and texture features are also included.
12 These features, due to the differences in cell structure, are shown as the spatial relationship and density differences between CT image pixels. 12,13 Imagomics converts the observable and unobservable image information into deep level features for quantitative analysis to achieve repeatability and stability. 11,13 The AUC in the training group was 0.89 (0.82-0.96), and that in the validation group was 0.82 (0.80-0.97). Similar to the prediction efficiency of only drawing the hematoma ROI, our method can also accurately predict the occurrence of HE, solving the problem of assessing an irregular hematoma, which is difficult to draw. 14 In this study, blend sign was screened as a clinical, independent factor, which was first proposed by Li Q et al. 6 and was based on the observation of nonenhanced CT scanning. The presence of blend sign and CTA spot sign were independent predictors of hematoma growth. 15,16 Although CTA spot sign and leakage sign are of high value in predicting the occurrence of E in patients with primary cerebral hemorrhage, they have shortcomings such as excessive radiation dose, high cost, and risk of allergic reaction. 13,17 Therefore, they cannot be used as the first choice for examination, while the blend sign is convenient and suitable for general use. 9,11,12 Hematoma density is affected by its components, specifically, hemoglobin is an important factor that determines the hematoma density on a CT scan. 11,13,18 When blood clots, the hematoma appears as high density on a CT scan. When there is active bleeding, the hematoma tends to be a lower density than the clot. The mixed blood at different bleeding times leads to the appearance of mixed syndrome. Sporns PB et al. 19 found that both blend sign and spot sign can predict early HE, but also early neurological deterioration, suggesting the value of blend sign may be higher than that of CTA spot sign. By adding independent clinical factors based on the radiomic model, we can easily find that the combined radiomic model shows good predictive effectiveness. The ROC of the training group was 0.90 (95CI% 0.83-0.96), and the ROC of the validation group was 0.88 (95CI% 0.71-0.93). Our combined prediction model of radiomics and blend sign had higher predictive performance than the previous model, constructed by only using radiomics. Therefore, the combined model may be a better choice for predicting HE in patients with HICH. 10,14,19,20 Limitations: This retrospective analysis used data from a single institution, with a small sample size. Furthermore, the sample sizes of the HE and NHE groups are unbalanced, and as such there may be selection bias. CONCLUSION The combined prediction model of radiomics and blend sign successfully predicted HE in patients with HICH. The results presented here suggest that this assessment model can be used to guide early clinical diagnosis to reduce the mortality and poor prognosis of patients with HICH. Specifically, this assessment model can aid in the prediction of whether patients with HICH will experience HE.
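As a rough illustration of the kind of feature-selection and scoring pipeline described in the Methods (a sketch on synthetic data, assuming scikit-learn is available; the matrix sizes, k = 100, and the penalty strength are illustrative placeholders rather than the study's exact settings, and here the LASSO step is folded into an L1-penalized logistic model for brevity), Z-score normalization, variance thresholding, SelectKBest, and conversion of the linear predictor into a 0-1 probability could be chained as follows:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import VarianceThreshold, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(187, 2259))   # synthetic stand-in for 2259 radiomic features in 187 patients
y = (X[:, :5].sum(axis=1) + rng.normal(size=187) > 0).astype(int)  # synthetic HE labels

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

pipe = Pipeline([
    ("zscore", StandardScaler()),                  # Z-score normalization
    ("var", VarianceThreshold(threshold=0.0)),     # variance threshold
    ("kbest", SelectKBest(f_classif, k=100)),      # SelectKBest (k = 100 as an illustrative value)
    ("l1_logit", LogisticRegression(penalty="l1", solver="liblinear", C=0.1)),  # LASSO-type selection
])
pipe.fit(X_tr, y_tr)

# Rad-score: linear combination of the surviving features and their coefficients,
# mapped to a 0-1 probability with the logistic (sigmoid) function.
lin = pipe.decision_function(X_va)
prob = 1.0 / (1.0 + np.exp(-lin))
print("validation AUC:", roc_auc_score(y_va, prob))
```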
2023-07-11T01:54:58.153Z
2023-06-15T00:00:00.000
{ "year": 2023, "sha1": "598103c6733b440c44f30f886d7f68ed4ea04927", "oa_license": "CCBY", "oa_url": "https://www.pjms.org.pk/index.php/pjms/article/download/7724/1804", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e3c54ce105f6344d6ca09ecaa10961a19397c36d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248542802
pes2o/s2orc
v3-fos-license
Effects of aerobic and resistance exercise on glycosylated hemoglobin (HbA1c) concentrations in non-diabetic Taiwanese individuals based on the waist-hip ratio
Background Glycosylated hemoglobin (HbA1c) reflects the average blood sugar over the past eight to twelve weeks. Several demographic and lifestyle factors are known to affect HbA1c levels. We evaluated the association of HbA1c with aerobic and resistance exercise in non-diabetic Taiwanese adults based on the waist-hip ratio (WHR). Methods We conducted this study based on TWB data collected from 90,958 individuals between 2008 and 2019. We estimated the beta (β) coefficient and 95% confidence intervals (CI) for HbA1c using multivariate regression models. Results Based on the multivariate analysis, lower HbA1c levels were associated with both resistance exercise (β-coefficient = -0.027, 95% CI, -0.037 to -0.017) and aerobic exercise (β-coefficient = -0.018, 95% CI, -0.023 to -0.013). Higher HbA1c levels were associated with abnormal WHR compared to normal WHR (β-coefficient = 0.091, 95% CI, 0.086 to 0.096). We detected an interaction between exercise and WHR (p for interaction = 0.0181). To determine the magnitude of the interaction, we performed additional analyses (with the reference group being 'abnormal WHR with no exercise') and observed substantial decreases in HbA1c regardless of the WHR and exercise category. However, the largest reduction occurred in the 'normal WHR and resistance exercise' group (β = -0.121, 95% CI, -0.132 to -0.109). Conclusions We found that resistance exercise, coupled with a normal WHR, was significantly associated with lower HbA1c levels among non-diabetic individuals in Taiwan.
Introduction Hemoglobin A1c, formed by the nonenzymatic glycosylation of hemoglobin, is essential for monitoring the management of chronic diabetes [1]. It is the main fraction among the various glycated hemoglobins and appears to be affected by iron deficiency and vitamin B12 deficiency [2]. Its measurement is essential for long-term glucose control, where a concentration of less than 7% is considered the target for good control in most cases with diabetes [3]. According to the 2009 International Expert Committee Report on the Role of the A1C Assay in the Diagnosis of Diabetes, HbA1c is considered a more stable biological index than fasting plasma glucose (FPG) and the oral glucose tolerance test (OGTT) [4,5]. Associations have been reported between HbA1c levels and cardiovascular diseases in both diabetic and non-diabetic patients [6][7][8][9]. Individuals with elevated HbA1c concentrations are more likely to develop complications related to diabetes such as coronary lesions [10]. Harvey and colleagues observed a significant decrease in major adverse cardiovascular events (MACE) among individuals with diabetes who had HbA1c levels below 7% [11]. Mortality associated with diabetes mellitus was reportedly 15.3% higher among patients who had HbA1c levels above 8% than in those with levels below 6% [12]. Besides glucose, demographic factors including age, sex, body mass index, and other variables have been investigated for HbA1c variability [13]. Results pointed to BMI and sex-specific differences in HbA1c variability among the study participants. In epidemiological studies, WHR and BMI were positively related to HbA1c among cases with prediabetes [14] and type 2 diabetes [15]. On the contrary, negative correlations have also been reported [16].
According to a previous study investigating anthropometric indices, WHR remains a superior index to predict T2D [17]. As noted above, WHR and BMI are both positively correlated with HbA1c, a screening tool to detect Type 2 diabetes. However, to our knowledge, numerous research works in Taiwan have focused on BMI. Exercise and HbA1c are essential for glycemic control in people with diabetes. In prior studies, resistance training was shown to significantly lower HbA1c levels in people with diabetes mellitus [18][19][20]. The results of meta-analyses indicated that both resistance and aerobic exercise significantly improved HbA1c levels among diabetic individuals [21]. Another systematic review and meta-analysis suggested that high-but not low-intensity resistance exercise was more beneficial for improving HbA1c levels in patients with diabetes mellitus (i.e., 0.61% vs. 0.23% reduction) [22]. HbA1c levels were substantially affected by resistance exercise compared to aerobic exercise [23]. Most of the studies describing correlations between changes in HbA1c and sociodemographic factors have focused on individuals with diabetes mellitus. In light of this, we evaluated the association of HbA1c with aerobic and resistance exercise in non-diabetic Taiwanese adults based on WHR. Participants, data source, and extraction We obtained data from the TWB data source, which is the first large-scale biological database in Taiwan. Its purpose is to collect genetic, environmental, clinical, and lifestyle data and track the health of at least 200,000 adults for at least ten years. Additionally, this data source provides scholars and experts with important information about the causes and mechanisms of common diseases in Taiwan, which can lead to the improvement of health treatment policies and prevention strategies. Subjects in the biobank were between 30 and 70 years old and did not have a history of cancer. These participants had signed informed consents at various centers during assessment visits (between 2008 and 2019). Demographic and clinical data were available for 132,720 subjects in the biobank. We excluded those with diabetes mellitus (n = 12,711) and those with missing or incomplete information (n = 29,051). Our final analysis models included data from 90,958 subjects. We received ethical approval from the Institution Review Board of Chung Shan Medical University (CS1-21197). Outcome, exposure, and covariates The primary outcome was HbA1c and WHR and exercise were the exposure variables. During recruitment, participants in TWB completed a questionnaire indicating how often, how long, and what kind of exercise they engaged in. Exercise was categorized as aerobic, resistance, and no exercise. Resistance exercise studied consisted of weight training, ball games, or a combination of both. On the other hand, aerobic physical activities included brisk walking, jogging, Taijiquan, rope jumping, gymnastics, yoga, Gigong, Chinese martial arts, swimming, hiking, biking, basketball, table tennis, soccer, badminton, tennis, golf, aerobic dance, ballroom dance, and hula hooping. Regular exercisers were those who participated in either aerobic or resistance exercise at least three times a week, lasting at least thirty minutes per session. The WHRs were categorized as normal or abnormal based on cut-off points: 0.92 for men and 0.88 for women as defined by the Health Promotion Administration, Ministry of Health and Welfare in Taiwan. 
It was measured as waist circumference (in cm) divided by hip circumference (in cm). HbA1c concentrations were measured using the automated XN-9000 hematology analyzer (Sysmex, Kakogawa, Japan).
Discussion Our analysis of TWB data indicated that, compared with no exercise, aerobic and resistance exercises are associated with decreased levels of HbA1c in adults with no history of diabetes in Taiwan. However, reductions in HbA1c levels were greater in participants who performed resistance exercises (β-coefficient = -0.027 vs. -0.018). The same trend had also been noted among individuals with diabetes [23,24], even though, according to Yang and his team, it has yet to be proven whether resistance exercise differs from aerobic exercise regarding their effect on cardiovascular risk markers [21]. Additionally, our findings also indicated that abnormal compared to normal WHR was associated with higher HbA1c levels. It is important to note that the association of HbA1c with anthropometric parameters was also examined in populations without diabetes [25]. The results indicated that the degree of association varied. We detected an interaction between exercise and WHR in relation to HbA1c (p = 0.0181). Our stratified analysis (with 'abnormal WHR and no exercise' as the reference group) showed that HbA1c levels decreased regardless of the subgroup. However, the greatest reductions occurred in the 'normal WHR and resistance exercise' group. In this group, the reduction in HbA1c was 0.121%. These results suggest that both WHR and exercise may be essential for regulating glycated hemoglobin. As far as we know, this is the first study to examine resistance and aerobic exercise in tandem with WHR among Taiwanese adults without diabetes. These findings add to our understanding of the factors controlling HbA1c concentrations, although further studies would help clarify their mechanisms. In the general model, we also found that HbA1c was positively associated with age, abnormal compared to normal WHR, current compared to nonsmoking, and male compared to female sex. Similar results have been previously reported among those with diabetes and prediabetes [26,27]. In one of these studies [27], current smokers had higher HbA1c levels than non-smokers, by 0.08% (0.9 mmol/mol), and this was linked to oxidative stress. However, Nakagami and colleagues published contrary results for Japanese populations [28]. The WHR has been used to a lesser extent compared to other anthropometric parameters [15]. In line with previous findings [29,30], we also found higher HbA1c values in participants with hypertension and hyperlipidemia. We acknowledge a few limitations. First, exercise data were collected from self-reported questionnaires within the biobank; hence, recall bias may have occurred. Next, resistance exercise intensities have been investigated for glycemic biomarkers in patients with type 2 diabetes, where patients who performed high-intensity resistance exercise showed lower HbA1c levels than those who performed low-intensity resistance exercise [22]. However, in the current study, information was not available on exercise intensity or volume. Conclusions For the very first time, we have provided evidence that a normal WHR together with resistance exercise may be associated with greater HbA1c reduction among non-diabetic individuals in Taiwan. Resistance exercise and WHR may be essential for strategies aimed at managing or improving HbA1c.
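As a minimal sketch of the type of multivariate model with an exercise-by-WHR interaction term used in this analysis (on simulated data, assuming statsmodels and pandas are available; the variable coding and effect sizes are illustrative and are not TWB values):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "exercise": rng.choice(["none", "aerobic", "resistance"], size=n),
    "abnormal_whr": rng.integers(0, 2, size=n),
    "age": rng.normal(50, 10, size=n),
    "male": rng.integers(0, 2, size=n),
})
# Simulated HbA1c with a small exercise effect, a WHR effect, an age trend, and noise.
effect = df["exercise"].map({"none": 0.0, "aerobic": -0.018, "resistance": -0.027})
df["hba1c"] = (5.6 + 0.09 * df["abnormal_whr"] + effect
               + 0.005 * (df["age"] - 50) + rng.normal(0, 0.3, size=n))

# Main effects plus an exercise x WHR interaction, as in the stratified analysis.
model = smf.ols("hba1c ~ C(exercise, Treatment('none')) * abnormal_whr + age + male",
                data=df).fit()
print(model.summary().tables[1])   # beta coefficients with 95% CIs
```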
2022-05-07T06:23:09.357Z
2022-05-05T00:00:00.000
{ "year": 2022, "sha1": "2c85f48a168e9886c770e45859a5f67926184283", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "af5587c371c782f1d028a0db89d020ee2b4d2f4c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
157073315
pes2o/s2orc
v3-fos-license
Yu Kilchun ’ s Concept of Reform of the Tax System in the Korean Empire During the Kabo Reform period, the tax system was reformed as a measure to increase the national budget and to stabilize the livelihood of the general public. In the process of collecting taxes, harmful practices, such as excessive demands and exploitation of commoners by local magistrates (suryŏng) and the isŏ class (composed of hyangni, local functionaries, and sŏri, petty clerks), surfaced as problems. Therefore, the government aimed to exclude the isŏ class from the tax collection process and to separate the tasks of tax imposition and collection. The Korean government during the Kabo Reform period deprived local magistrates and the isŏ class of the right to collect taxes and appointed tax officers to handle tax related matters. However, this endeavor soon foundered due to the opposition of the yangban class and resistance from the isŏ class, which lost its economic base as a result. After the so-called Introduction During the Kabo Reform period, the tax system was reformed as a measure to increase the national budget and to stabilize the livelihood of the general public.In the process of collecting taxes, harmful practices, such as excessive demands and exploitation of commoners by local magistrates (suryŏng) and the isŏ class (composed of hyangni, local functionaries, and sŏri, petty clerks), surfaced as problems.Therefore, the government aimed to exclude the isŏ class from the tax collection process and to separate the tasks of tax imposition and collection. The Korean government during the Kabo Reform period deprived local magistrates and the isŏ class of the right to collect taxes and appointed tax officers to handle tax related matters.However, this endeavor soon foundered due to the opposition of the yangban class and resistance from the isŏ class, which lost its economic base as a result. 1 After the so-called Ŭlmi Incident (the assassination of Empress Myŏngsŏng by the Japanese), the tax imposition and collection system was reverted back to the form it had been prior to the Kabo Reforms, and local magistrates and the isŏ class once again took charge of levying and collecting taxes. Understanding the isŏ class's exploitation of commoners in the tax collection process as a problem, Yu Kilchun pursued tax reform of the Chosŏn government during the Kabo Reforms.In October 1895, as the Minister of Internal Affairs (naemu taesin), he requested the discussion and the majority vote on the "Provision on hyanghoe (district assemblies)" and "Regulations on the management of the Community Compact System (hyangyak)" to systematize hyanghoe and to discuss tax-related matters at hyanghoe.However, with the local magistrates and the isŏ class once again in charge of the tax administration, such an attempt faced limitations.Eventually, Korea became subject to the local administration reform under Japanese colonial rule, with the right to collect taxes still in the hands of the local magistrates and the isŏ class. 
Yu Kilchun's "Semubu (Tax Department)" is a document on taxation created after the failed attempt to revoke the right to collect taxes from local magistrates and the isŏ class during the Kabo Reforms.Existing studies on Yu's concept of the system of public finance have used Sŏyu kyŏnmun (1895, Observations on travels in the West) and "Chijeŭi" (1891, Viwes on the land system), "Sejeŭi" (1891, Opinions on the tax system), and "Chaejŏng kaehyŏk" (year unknown, Financial reform), which are essays included in the "Kyŏngje kaehyŏngnon (Theories of economic reform)" section in Yu Kilchun chŏnsŏ 4 (Collected works of Yu Kilchun). 2Using the above sources, studies have noted that Yu argued for tax reform centered on land tax reform, which was modelled on the land tax reform (J.chiso kaisei) of the Meiji government.Yu's tax reform was considered a "landlord system-based reform," as he attempted to resolve socioeconomic contradictions through tax reform while maintaining the landlord system. 3The sources and documents mentioned above were also3 used in studies that analyzed Yu's economic reform ideas as representative of a modern Korean intellectual, as well as studies on the land tax reform theory of the Enlightenment Party (kaehwap'a) in 1894.Most of these studies also concluded that Yu's theory on economic reform was conceived from the perspective of landlords. 4Recently, a study revealed that the key to Yu's financial reform theory was the theory of increasing taxes, which was a break from the traditional concept on financial affairs. 5revious studies on Yu's financial reform and tax reform have focused on the period before the Kabo Reform; these studies mainly discussed the national land system and the central tax system, since the only available sources were the ones listed above. 
However, considering that Yu personally participated in the Kabo Reforms as the vice-minister, and later minister, of Internal Affairs and lived in Japan for twelve years afterwards in exile, continuing the discourse only with sources from before the Kabo Reforms has many limitations and leaves much to be desired. In comparison, "Semubu (Tax Department)," a newly discovered source, was written in the Taehan Empire period, and it addresses tax systems in regions outside of the capital city. Therefore, we expect that this document will provide an important opportunity for improving and enhancing the discourse that began in previous studies. "Semubu" is one of the documents that was donated by Yu's descendants to the Korea University Museum. Written in mixed Korean and Chinese script, the document is made up of 65 pages and 9,236 characters. With the exception of empty pages, 22 pages consist only of sentences, while 25 pages contain tables. The whole document is equal to the amount of 34 pages of 200-character squared manuscript paper. "Sejeŭi (1891, Opinions on tax reform)," authored by Yu in 1891 on a similar topic, was written entirely in Chinese characters and consisted of 3,451 characters on a total of 18 pages, equivalent to 19 pages of 200-character squared manuscript paper. "Semubu" was written on ruled "panjungdangp'an" manuscript paper with 12 vertical lines per page. The document not only uses words but also tables for explanation; some of the words have been crossed out, blotted, or rewritten, and on the bottom center of the page are the Chinese characters for "panjungdangp'an." Underlined texts as well as parts that have been blotted out and revised suggest that this is a draft. An analysis of the handwriting suggests that it was written by Yu, as the style of handwriting is the same as that of "Sejeŭi," which had been written in polished Chinese characters without revisions. Only the word "Semubu (稅務部)" is found on the cover of the document. On the next page, the word "semu (taxation, 稅務)" is written on the first line. [Figure: the document on the left is "Semubu," and the one on the right is "Sejeŭi."] Although the year the document was written is not specified in the work, "the first year of the Kwangmu Era (1897)" and "the second year of the Kwangmu Era (1898)" are written on the sources, such as the "tax statement," which appear as examples in "Semubu." Therefore, it seems safe to assume that Yu Kilchun wrote this draft of "Semubu" during Emperor Kojong's reign in the Korean Empire. During the Kwangmu Era (August 1897 to September 1907), Yu was exiled in Japan. With the collapse of the Kim Hongchip cabinet after Emperor Kojong's flight to the Russian Legation in February 1896, Yu went into exile in Japan and was only able to return to Korea after the enthronement of King Sunjong in July 1907. Judging by Yu Kilchun's position on taxation during the Kabo Reforms and the time "Semubu" was written, it seems safe to assume that this paper was written from a perspective critical of the reactionary trend in the tax system of the Korean Empire. This study attempts to introduce and analyze the content of the newly discovered "Semubu." "Semubu" is largely divided into three parts: one, reasons for the necessity of tax collection; two, types of taxes; and three, tax collection methods. However, instead of delving into each of the three parts in detail, this paper will analyze "Semubu" as a whole, focusing on the characteristic discussions in this document.
Reorganization of Local Administrative Districts The "Issue on the establishment of hyanghoe (district assemblies)," which was discussed by the kun'guk kimuch'ŏ (Deliberative Council) on July 12, 1894, is speculated to have been drafted by Yu Kilchun. 9The issue called for hyangwŏn, or representatives of each myŏn (township) and members of district assembly, to make collective decisions on the continuation or suspension of township-level activities, such as the enactment of legislation and the rectification of various evils of convention.Previous studies have understood this document as an attempt to prevent the ruling class's exploitation of commoners by involving the local people in politics, 10 a part of the movement to establish regional assemblies, 11 or an attempt to institutionalize hyanghoe. 12Hyanghoe appears again in the "Regulations on the payment of the land and poll tax (Kyŏlhojŏn pongnap changjŏng)," which grants hyangwŏn the right to taxation, instead of local magistrates and the isŏ class, and specifies that hyangwŏn should collect and remit land and poll tax (kyŏlhojŏn).This was considered an attempt to increase the national budget by inducing the participation of representatives elected by popular vote in tax administration to reduce harmful practices and remove exploitations of the middlemen. 13n April 1895, the Kim Hongchip-Park Yŏnghyo cabinet announced the Legislation on the Office of Internal Revenue and the Office of Taxation (Kwansesa kŭp chingsesŏ kwanje)," "Regulations on Town Tax Agency (Kakŭp puseso changjŏng)," and "Regulations on Income (Suip kyujŏng)" to eliminate the role of hyanghoe in the process of tax collection and established separate organs for tax collection.The Office of Internal Revenue (kwansesa) was installed in each province, and the Office of Taxation (chingsesŏ) was established in each county and district (Tax Agency in each town) to handle tax related matters instead of local magistrates and officials.Compared to the Deliberative Council's measures, this is understood as a reorganization of the tax system that only took into consideration the state's position, incited by Japanese militarists, and dismissed the participation of the people.14 Hyanghoe appears again in October 1895, in the "Provision on hyanghoe" and "Regulations on the Management of the Community Compact System" announced by the fourth Kim Hongchip cabinet.These had been brought to the discussion table by Yu Kilchun as the Minister of Internal Affairs.The "Provision on hyanghoe" and "Regulations on the management of the Community Compact System" stated that hyanghoe should consist of rihoe (village assemblies), myŏnhoe (township assemblies), and kunhoe (county assemblies), and that hyanghoe should discuss all affairs within their jurisdiction including taxation and put them to majority vote.It was rather significant that hyanghoe were composed of people regardless of their class.However, as members of hyanghoe only played the role of supporting the government and were restricted from participating in politics, hyanghoe was different from a modern assembly.15 Yu Kilchun maintained similar ideas on hyanghoe during the period Korea was governed by the Deliberative Council and the period when Korea was under the fourth Kim Hongchip cabinet, in that Yu wanted to systematize hyanghoe and incorporate it into the government administration."Semubu" also contains Yu's ideas of hyanghoe in the form of chuhoe (state assemblies) and hyanghoe.Yu explained chuhoe as a statelevel assembly, 
and hyanghoe as a district-level assembly, under the premise that administrative districts were divided into chu and hyang, which were nonexistent at the time. Administrative divisions in Korea during the Chosŏn dynasty consisted of 8 to (province), 5 pu, 5 taedohobu, 20 mok, 75 tohobu, 77 kun, and 148 hyŏn, which were all under the control of the regional government officials who had been appointed by the central government.Kun and hyŏn (county-level divisions) were divided into smaller units, myŏn and ri.While government officials were not dispatched to oversee myŏn and ri, they were under indirect control of regional government officials as collective units of taxation. During the Kabo Reforms in May 1895, however, eight to were subdivided into 23 pu, and kun and hyŏn simply became kun, resulting in the administrative divisions of 23 pu and 337 kun.This was changed again in August 1896, where 23 pu were redivided into 13 to, each of which consisted of 8 pu, 1 mok, and 332 kun.Although the system seems to have reverted back to the time prior to the Kabo Reforms, the fact that 8 to were redivided into 13 and that kun and hyŏn were all simplified into kun 15 Kim T'aeung, 1997, "Kŭndae Chungguk," 50-52; Yi Sangch'an, 1989, "1894-5 nyŏn chibangjedo," 85-86; Yi, 1986, "1906-1910 suggests that the concept of administrative division during the Kabo Reforms was retained during this time. 16ccording to "Semubu," it seems that Yu Kilchun wanted to reorganize the local government system that had been established in 1896: …therefore, there is one hyang, called P'ungnak-hyang, which consists of 12 li-Kasŏng, Insu, Taejŏng, Chŏnghŭi, Changhang, Piripshin-ri, Samsu, Changch'un, Haechŏn, and P'yŏngch'on.P'ungnak-hyang belongs to Kwangnŭng-gun, which is under the jurisdiction of Hannam-ju (州)… 17 As seen in the above excerpt, there were 10 ri-Kasŏng-ri, Insu, Taejŏng-ri, Chŏnghŭi-ri, Changhang-ri, Piripshin-ri, Samsu-ri, Changch'un-ri, Haechŏn-ri, and P'yŏngch'on-ri-in P'ungnak-hyang, which was under the jurisdiction of Kwangnŭng-gun, which was under the jurisdiction of Hannam-ju.There are no explanations as to the actual location of the areas Yu proposed in this document, and these are names that were neither in use during Yu Kilchun's lifetime nor in the present day.However, there are clues that hint to where these names came from.Hannam was a different name for Suwon in Kyŏnggi-do.Han-ju was also an old name for Kwangju as well as the name of one of the nine chu and five sogyŏng (cities) in Shilla.Kwangnŭng was also a different name for Kwangju, Kyŏnggi-do.From this, we can speculate that Kwangnŭng-gun, Hannamju, is approximately present-day Kwangju in the Kyŏnggi-do area. 18 Yun Chŏngae, 1985, "Hanmal chibangjedo kaehyŏk ŭi yŏn'gu" (Study on the local government system reform in the late Korean Empire period).Yŏksas Hakbo 105, 95. 17 Yu Kilchun, "Semubu" (Tax Department), 9. 
18 Han'guk yŏksa chimyŏng sajŏn p'yŏnch'an wiwŏnhoe, 2008, Han'guk yŏksa chimyŏng sajŏn (Dictionary of Place Names in Korean History), Yŏgang Ch'ulp'ansa.Kwangju, Kyŏnggi-do, was the place where Yu Kilchun's family burial ground was located, and it was also where Yu Kilchun spent three years in exile after the Foreign Disturbance of 1866 (Pyŏngin Yangyo).Hanam, Kyŏnggi-do, Above all, the most important idea from the excerpt above was that Yu wanted to reorganize Korea's administrative divisions into chu, kun, hyang, and ri.It was an attempt to change the to (province) into chu (state), unify the kun and hyŏn (county-level administrative divisions) into kun (county), and rename myŏn (town) and ri (village), which were small autonomous administrative units under kun, into hyang (district) and ri.where Yu Kilchun's mother is buried, was part of Kwangju up until 1989.19 Gray areas refer to government administration units areas included in direct government management system. Yu Kilchun's reform idea was in line with the trend of administrative division reform, in terms of redividing broad, provincial-level units and unifying smaller county-level units as kun. One noteworthy point is that Yu used "chu" as a provincial-level unit of administrative divisions.Chu had been used in Unified Shilla, when the whole country was divided into nine chu and five sogyŏng, and, before that, in ancient China.The Yu Gong (Tribute of Yu), a chapter in the Xiashu (Book of the Xia dynasty) section of Shujing, the Book of Documents, records that when Yu ruled China, he had divided the whole nation into nine chu (州 C. zhou)-Yuzhou, Jizhou, Yanzhou Qingzhou, Xuzhou, Yangzhou, Jingzhou, Liangzhou, and Yongzhou.Based on this, China was often referred to as "Ku zhou," meaning nine provinces.During the Han dynasty, all of China was divided into 13 zhou, which was divided into kun (郡 C. jun), which was divided into hyŏn (縣 C. xian). In Sŏyu kyŏnmun, Yu had already translated the American administrative divisional unit of the "state" and the British administrative division unit of "county" into chu (ju).For instance, Massachusetts, US, was translated as Masa-ju (磨沙州); Lancashire, UK, was Ranjusa-ju (蘭柱沙州). 20y applying chu, a unit of local government used in ancient China, to Korea as well as to regions in the US and UK, Yu might have intended to create an equal and uniform administrative divisional system for the world. Renaming myŏn and ri into hyang and ri is an idea that also appears in Pan'gyesurok (Records of Pan'gye Yu Hyŏngwŏn), written by Yu Hyŏngwŏn (1622-1673).Referring to the ancient Chinese administrative division system, Yu Hyŏngwŏn had also conceived of the hyang-ri system, instead of the existing myŏn-ri system. 21Myŏn was not an adminis-trative unit that was under direct government control, but hyang was to be a direct production and governing unit with its territory designated by the central government and ruled by hyangjŏng, an appointed government official. 
Yu Hyŏngwŏn's hyang-ri system and Yu Kilchun's hyang-ri system are similar in that both wanted to incorporate low-level administrative divisional units that had not been under direct government control into a governing unit.In the Chosŏn dynasty as well as during the regional government system reform period of 1895 and 1896, myŏn was not considered a governing unit.Only when Korea became a Japanese protectorate, myŏn became recognized as a regional administrative unit and came under direct government supervision in 1917 with Japan's promulgation of the myŏn system.But Yu Kilchun had conceived of a reform of administrative divisional units, where the myŏn becomes a governing unit, even earlier.Since Yu Kilchun mentioned Yu Hyŏngwŏn in Chijeŭi in 1891, it is possible that he consulted Yu Hyŏngwŏn's idea of the hyang-ri system. 22 Establishment of Local Assemblies and Local Taxes Yu Kilchun argued for the necessity of chu tax for the operation of a chu and of hyang tax for the operation of hyang, and proposed the implementation of local taxes along with the reorganization of administrative division into the chu-kun-hyang-ri system.Specifically, he proposed that national taxes should consist of land tax, business tax, and income tax; chu taxes should consist of added land tax, added business tax, and household tax; and hyang taxes should consist of supplementary land tax, supplementary business tax, and supplementary income tax. As suggested by the names of chu tax and hyang tax, each of local taxes was levied as a surtax on national taxes.Specifically, added land tax was to be "from 15 pun to 20 pun of the national tax," while added business tax was to be "from 7 pun to 15 pun of the national tax." 23This meant the amount of added land tax could be set from 15 to 20 percent of the national land tax, and added business tax from 7 to 15 percent of the national business tax.Similarly, hyang taxes were to be "from 10 pun to 20 pun" regardless of the tax category, which meant that the supplementary land tax and supplementary business tax could be set from 10 to 20 percent of added land tax and added business tax, respectively. 24Exploitation occurred in the tax collection system where local magistrates and the isŏ class were in charge of collecting taxes, mainly because of the lack of finances in local governments and because the isŏ class did not receive official pay from the government before the Kabo Reforms.Moreover, all the expenses required for tax collection needed to be raised by local government offices. The members the Enlightenment Party (kaehwap'a) had designed a plan to introduce a local tax system, but it was not implemented in reality. 25Instead, part of the land and poll tax was designated as expenses for kun, but it was not enough to cover the demands of all local government offices, and most of it was used to pay wages to government office employees. 26Moreover, local governments had to send finances to the central government, which further reduced their own revenue, and the burden of a lack of finances for local governments was shifted to the general public, who were inevitably exploited in the process of tax collection. 27herefore the implementation of local taxes was meaningful in that they were separate from national taxes and legally guaranteed stable finances for local governments. 
[28] Yu Kilchun seems to have planned to collect local taxes as surtaxes on national taxes because he could not find separate sources of taxation for local taxes. Since local taxes were to be charged as surtaxes, they would have been strongly influenced by fluctuations in national taxes. More fundamentally, taxpayers were very likely to understand local taxes as an increase in national taxes, all the more so because local taxes were to be levied as surtaxes. As a result, the establishment of local taxes was likely to meet resistance.[29] The resistance against taxation caused by the implementation of local taxes was to be resolved through the creation of local assemblies. Yu Kilchun conceived of the implementation of local taxes together with the establishment of local assemblies, chuhoe and hyanghoe.

Chu taxes are levied for the administrative management of each chu. Residents of each chu are responsible for chu taxes. Therefore, chu taxes are designed to serve a specific purpose and are separated from national taxes. If the tasks of imposing and collecting chu taxes are delegated to provincial governors, and problems of exploitation surface, there is no way to eliminate the problems. To prevent exploitation, chuhoe are established, and members of chuhoe are elected according to the election law from among the residents of each kun within the chu. Every winter, the members of chuhoe gather at the chu office to discuss and set the annual budget and expenses for the next year. The members decide on the amount of chu taxes to be levied, depending on the level of the burden of administrative tasks involved. The budget proposal should be prepared and submitted by the governor of the chu to the members of the chuhoe, who will then make the decisions. The members of chuhoe review the proposal and add or remove necessary provisions. However, any changes can be made only after majority consent of the members of chuhoe, assuring that a shortage of expenses does not occur...[30]

Hyang taxes are levied for the administrative management of each hyang... The amount of taxes should be more than one tenth and no more than one twentieth of kun taxes. In special circumstances, members of hyanghoe may decide without bias to propose a larger budget, but it cannot be implemented without the authorization of the ministers of internal affairs and finance. Also, the budget for ordinary tax revenue and expenditure needs to be determined by the hyanghoe and authorized by the kun'gam (county magistrates)...[31]

Chuhoe was given the authority to elect members of the assembly through elections, to deliberate on and adjust the annual budget proposal prepared by the state governor, and to set the amount of tax to be levied. Less information is available on hyanghoe, but hyanghoe had the authority to deliberate on the budget of ordinary tax revenues and expenditures.

In the Deliberative Council's concept of hyanghoe from 1894, only the authority to impose taxes was assigned to myŏnhoe, but Yu's "Semubu" assigned the superior authority to deliberate on the budget to chuhoe and hyanghoe. This idea of guaranteeing to chuhoe and hyanghoe the right to deliberate on the budget, a right that contributed to the development of the modern parliamentary system, was groundbreaking at the time. It was a measure not only to eliminate exploitation and similar practices carried out by local magistrates and the isŏ class in the process of tax collection, but also to fundamentally overcome the medieval governing system.
Tax Collection System Reform

Yu Kilchun also devised a different concept for the imposition and collection of taxes. The Deliberative Council had assigned the right to impose taxes to hyanghoe in order to eliminate local magistrates and the isŏ class from the process of taxation, but in "Semubu" Yu assigned this right to the hyanggam (district superintendent).

Since tax source review and taxable amounts are determined in the budget plan, hyanghoe is solely responsible for gathering and collecting taxes. However, reviewing tax information is not possible without hyanghoe. As for the land tax, real estate ledgers are kept at the hyang level; for the business tax, the register of operating businesses is also kept at the hyang level; income tax and other fundamental information for taxes are basically kept at the hyang level. Therefore, it is impossible not to consider hyang as the base unit of local government administration...[32]

The government ordinance assigning kun'gam the responsibility to collect taxes by themselves all year long cannot be carried out in reality. Therefore, currently, a large number of members of the isŏ class and military officers are dispatched to hyang and ri and cause disturbances. Therefore, in this system, a hyanggam and a hyangsegam (district tax officer) are assigned the duties of managing the taxes of one hyang each and carry out the orders of superior authorities. This creates three benefits. The first is the elimination of evil practices, such as tax imposition without cause, omission of taxpayers, double taxation, deception, and tax evasion. Second, it can get rid of the evil practices of the isŏ class, military officers, and local magistrates, who inculpated innocent people and robbed them of their properties. Third, hyanggam and hyangsegam are constantly aware of the circumstances in the hyang. Therefore, they can collect taxes within a certain period after figuring out the economic situation of the hyang. This would help reduce the number of people who postpone or fail to pay taxes...[33: Ibid., 25-26.]

Hyanggam were the government officials in charge of hyang, which became an administrative unit with governmental authority under the change of the administrative divisional system into the chu-kun-hyang-ri system. With the assembly given the right to deliberate on the budget, all other administrative activities, such as tax source review, taxation, and tax collection and payment, were assigned to officials at the hyang level. Yu believed that this would eliminate evil practices carried out by the isŏ class and also reduce late or delayed tax payments, as a hyanggam would have a better understanding of the territory under his jurisdiction than a kun'gam (county magistrate), who was in charge of overseeing a higher-level and bigger region.

Yu conceived and proposed a relatively specific process of taxation, tax collection, and payment through the hyang, which can be summarized as follows:

1. As for national taxes, the head of a (kun) tax office delivers the list of taxes, which contains a specific tax amount for each category, to the hyanggam. (For chu taxes, the kun'gam delivers the list of taxes to the hyanggam.)
2. The hyanggam reviews the content of the list of taxes and compares it with the register of land taxes and the register of operating businesses. When there are differences, the hyanggam compares the list of taxes with sources from city tax offices and county offices to correct the differences.
3. The hyanggam records the content of the list of taxes accordingly: land tax information should be recorded in the periodical land tax ledger, and business tax should be recorded in the periodical business tax ledger.
4. The hyanggam issues a tax invoice, which contains the tax type and amount to be paid, to the residents (taxpayers) of the hyang.
5. The taxpayer pays the taxes to the district tax office by the due date.
6. The tax officer notifies the hyanggam of the receipt of tax payments.
7. The hyanggam records the date of the receipt of tax payments in the periodical ledger and stamps half a seal each on the paid tax amount recorded in the periodical ledger and on the tax invoice. The hyanggam also stamps a seal in the center of the tax invoice and returns the tax payment register to the tax officer.
8. The tax officer also stamps a seal on the periodical tax ledger, halves the tax invoice, keeps one half at the district tax office, and issues the other half to the taxpayer.

Taxes were to be handled at the hyang level once the list of taxes was delivered from the kun to the hyang. The government office at the hyang level was to be in charge of managing the tax roll, which lists the types of taxes and could be used as a resource for determining tax amounts apart from the list of taxes; the periodical ledgers by category of taxation, which contain the tax amounts and taxpayers at the hyang level; and the tax invoices by category of taxation, which were issued to the taxpayers. The officials in charge of taxation at the hyang level were the hyanggam and the hyangsegam, and all taxes were to be paid at the district tax office. This shows that Yu Kilchun intended to specialize tax work by establishing official positions and institutions for the handling of taxes and the storing of related documents at the hyang level.

Under this structure, the authority of counties in tax-related matters (tax source review, taxation, and tax collection) was taken away from the kun and given to the hyang. In this way, Yu's concept of the tax system in "Semubu" was a measure to eliminate from the process of tax collection the local magistrates and the isŏ class, who had caused the exploitation of taxpayers.

Then, would there be no possibility of such exploitation by local magistrates and the isŏ class recurring at the hyang level? Considering that government officials would be able to scrutinize local people's situations, since the hyang was a smaller governing unit than the kun, and that excessive taxation could be prevented in advance by the local assemblies, which had the power to deliberate on the budgets and fix the tax amounts, there was a low possibility of such exploitation happening in the new system. Moreover, as local taxes were to be established solely to cover the expenses of regional governments, the possibility of imposing exorbitant taxes was definitely lower than before.
In sum, Yu Kilchun designed a tax system in which tax administration was conducted at the hyang level, a sub-unit of the kun. In this respect, his concept of the reorganization of administrative divisions was ultimately focused on changing myŏn, a non-governing administrative unit, into hyang, a governing administrative unit. His argument for the establishment of local taxes was likewise centered on stabilizing the foundation of the operation of the hyang by providing independent finances to cover government expenditures. On this basis, Yu wanted the hyang to take responsibility for tax service and administration and to conduct close and detailed tax collection. This was a concept that could secure stable national finances and reduce the authority of the local magistrates and the isŏ class, who had been dispersed throughout the kun.

[Figure: the "Periodical Ledger" of land tax, a type of national tax. A hyanggam receives land tax statements from the county tax office, checks the amount of taxes that the hyang must pay, and records the amount of land tax that each taxpayer must pay in the periodical ledger. When the taxpayers of a hyang pay their taxes, the hyanggam records the date the tax was paid and stamps a seal across the pages of the periodical ledger and the tax invoice.]

In order to "extend the power of the people," Yu Kilchun proposed dividing the tax payment period into two. The first period for land tax payment runs from July 1 to December 30, and the second from January 1 to June 30; the first period for business tax payment runs from January 1 to June 30, and the second from July 1 to December 30. The hyanggam then stamps a seal across the "Periodical Ledger" and the tax invoice so that each is marked with half a seal, as well as across the pages of the tax invoice numbered "Pyŏng No. 1" and "Pyŏng No. 2." The tax invoice is then returned to the hyangsegam. The hyangsegam then files the page "Pyŏng No. 1" at the tax office and returns the page "Pyŏng No. 2" to the taxpayer.

Conclusion

"Semubu," a document housed at the Korea University Museum, is assumed to have been authored by Yu Kilchun during his exile in Japan during Emperor Kojong's reign in the Korean Empire. It criticizes the changes in the tax system by which the authority to impose and collect taxes, which had been taken away from the local magistrates and the isŏ class during the Kabo Reforms, was once again returned to them.

Specifically, Yu Kilchun devised a concept of tax system reform on the premise of the reorganization of the administrative districts into the chu-kun-hyang-ri system. This idea was similar to the regional system reforms of 1895 and 1896, when provincial-level administrative divisions were reorganized and smaller-level divisions were unified into kun. Yu's idea of naming the provincial-level administrative divisions "chu," which was taken from the ancient Chinese system, and of renaming myŏn to hyang was similar to Yu Hyŏngwŏn's concept of the hyang-ri system, in that both wanted to make myŏn (hyang) a governing administrative unit placed under direct government control.
To fund the operation of local governments, Yu proposed the creation of local taxes, chu taxes and hyang taxes, separate from national taxes, and these taxes were to be levied as surtaxes on the national taxes. Yu also hoped to establish chuhoe and hyanghoe, regional assemblies with the authority to deliberate on the budget. The authority to review tax sources and to levy and collect taxes was given to the hyang, a small unit of administrative division that was to become a new governing unit.

This was a change from the previous system, in which the authority to review tax sources and to levy and collect taxes belonged to the kun. By giving this authority to the hyang, Yu Kilchun planned to remove local magistrates and the isŏ class from the tax collection process. Yu's concept of the tax system was clearly designed to reduce the exploitation of taxpayers: the hyang was a smaller unit than the kun and therefore its administration could interact with its residents more closely; a local assembly was to be established to be in charge of deliberating on the budget and determining the tax amounts; and local taxes were to be instituted as a separate financial source for the operation of hyang-level government offices.

Yu Kilchun's writings on finances and the tax system that have been explored in previous studies mostly originate from before the Kabo Reforms. In terms of content, these documents mainly proposed land tax reform and the central government's tax system reform. Since "Semubu" discusses the reorganization of administrative divisions and local tax administration, as well as local tax system reform, the discovery of this text is significant, as it expands the range of the reform ideas proposed by Yu Kilchun and, furthermore, by the Enlightenment Party.
[List of figures: Figure 1. Pages of the draft of Semubu; Figure 3. Cover and the first page of Semubu; Figure 4. Administrative Divisions Before and After the Kabo Reform and Yu Kilchun's Concept of Administrative Divisions; Figure 5. An Example of the "List of Taxes"; Figure 7. An Example of a Tax Invoice.]

[Table 1. Yu Kilchun's Concept of National Taxes, Chu Taxes, and Hyang Taxes in "Semubu".]

<Abstract>

Yu Kilchun's Concept of Reform of the Tax System in the Korean Empire

Yang Jinah

Yu Kilchun, in "Semubu (Tax Department)," criticizes the trend in the tax system by which the authority to impose and collect taxes, which had been taken away from the local magistrates and the isŏ class (composed of hyangni, local functionaries, and sŏri, petty clerks) during the Kabo Reforms, was once again returned to them.

Yu Kilchun devised a concept of tax system reform on the premise of the reorganization of the administrative districts into the chu-kun-hyang-ri (state-county-district-village) system. Yu's idea was to make myŏn (hyang, district) a governing administrative unit, placed under direct government control.

To fund the operation of local governments, Yu proposed the creation of local taxes, chu taxes and hyang taxes. Tax amounts were to be determined by local assemblies, chuhoe and hyanghoe, which were given the authority to deliberate on the budget. The authority to review tax sources and to levy and collect taxes was given to the hyang, a small unit of administrative division. By giving this authority to the hyang, Yu Kilchun planned to exclude local magistrates and the isŏ class from the tax collection process.

Since "Semubu" discusses the reorganization of administrative divisions and local tax administration, as well as local tax system reform, the discovery of this text is significant, as it expands the range of the reform ideas proposed by Yu Kilchun and, furthermore, by the Enlightenment Party.

[Note 29: After the failure of the Local Tax Regulations in 1907, the Japanese Residency-General of Korea tried to avoid the term "local tax" when enacting the Local Expenditure Law in 1909. The reason was that "using the term local tax could aggravate the people as if a new tax is being imposed on them."]
2019-05-19T13:03:38.204Z
2016-08-31T00:00:00.000
{ "year": 2016, "sha1": "8f6fc243ec8498b55013f9051f05fed5d55332cf", "oa_license": "CCBYNC", "oa_url": "http://ijkh.khistory.org/upload/pdf/ijkh-21-2-49.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "8f6fc243ec8498b55013f9051f05fed5d55332cf", "s2fieldsofstudy": [ "History" ], "extfieldsofstudy": [ "Economics" ] }
237770204
pes2o/s2orc
v3-fos-license
Special Issue on Digital Twins in Industry

Digital twin (DT) is an emerging and fast-growing technology which provides a promising way to connect and integrate physical and virtual spaces seamlessly [...]

Introduction

Digital twin (DT) is an emerging and fast-growing technology which provides a promising way to connect and integrate physical and virtual spaces seamlessly. In brief, a DT is a digital representation of a physical object or system. It has bi-directional communication capability with the physical twin through sensors and networks. DT is an evolution and integration of the various information and communication technologies (ICT) that have proliferated in the IT scene over the last two decades. It integrates the internet of things (IoT), big data, cloud and edge storage, the artificial intelligence of things (AIoT), augmented reality (AR), and related technologies to form a comprehensive communication network for the control, monitoring, diagnosis, and health inspection of equipment and facilities, traffic and transportation systems, buildings, and more.

DT has attracted much interest and enthusiasm from academia as well as industry. While academia has worked on algorithms and frameworks, industry will be the final implementer, as it can see the immediate benefits offered by DT technology. This Special Issue focuses on the industrial applications of DT technology, and it provides insights for practitioners on how DTs can be successfully planned and implemented, as well as on the desirable outcomes achieved.

Digital Twin Technology and Applications

This Special Issue contains 11 chapters covering a broad range of applications, in-depth reviews, and the integration of DT with other technologies such as AR and Industry 4.0.

In their chapter, Wärmefjord et al. discussed the barriers in industry that must be overcome before the use of DT for variation management and geometry assurance can be fully utilized. An extensive interview study with engineers from eight different companies was conducted. They concluded that 3D models must be kept fully updated in order to maintain a robust digital thread [1].

The chapter by Sepasgozar advocated DT and web-based gaming technologies for online education; this is not quite an industry application of DT as such, as it is aimed more at educators. Nevertheless, it is useful in view of COVID-19, as much face-to-face instruction has become virtual and online [2].

The chapter by Jacoby and Usländer emphasized the importance of interoperability by addressing the need to consolidate the various standards for DT and IoT. A classification scheme was created and applied to the standards in order to adopt serialization formats and network protocols. This is an important issue, as it could lead to smooth and robust operation of DTs and the ability to overcome barriers to Industry 4.0 [3].

An industrial application of DT was presented by Bambura et al., who implemented a DT for engine block manufacturing processes. They constructed a DT consisting of three layers: physical, virtual, and information-processing. Raw data were collected using programmable logic controller (PLC) sensors. They concluded that, even though only partial results were presented, DT seems to be a prospective real-time optimization tool for the industry [4].

Another industrial application by Sierla et al.
proposed a semi-automatic methodology for generating a DT of a brownfield plant in the area of construction and urban development. As outlined in the paper, many procedures are required to construct a DT. The case study showed that only a few manual edits were needed for the automatically generated simulation model [5].

In their chapter, Greco et al. used a DT to set up models for monitoring the performance of manual work activities, with near real-time feedback to support the decision-making process for improving working conditions. This is an interesting presentation of a human-centric DT for improving ergonomics and working conditions [6].

Autiosalo et al. presented an integrated DT for an overhead crane, providing a service for machine designers and maintainers in their daily tasks. They showed that a good-quality Application Programming Interface (API) is a significant enabler for the development of DTs, and advised traditional industrial companies to start building their own API portfolios [7].

In another industrial application, Pang et al. developed a DT and Digital Thread framework for an "Industry 4.0" shipyard. A new framework combining the DT and the Digital Thread was proposed for better management and to ensure continuity and traceability of information. The twin/thread framework encompasses specifications that include the organizational architectural layout, security, user access, databases, and hardware and software requirements [8].

The chapter by Pareja-Corcho et al. reported the development of simulation tools for gerotor pumps. The paper is not a direct application of the DT but presents a virtual prototype which can be considered in the context of a DT tool. Future work is necessary to further integrate the physical pump with the software tool [9].

Agnusdei et al. presented an interesting chapter asking whether DT technology supports safety management. The study analysed existing fields of application of DTs for supporting safety management processes, and provided a comprehensive bibliometric review to identify future trends linking the DT approach and safety issues [10].

Carvalho and da Silva reported on a rarely addressed area: sustainability requirements in DT-based systems. They conducted a meta-systematic literature review and concluded that DTs across the product life cycle, or the DT life cycle itself, are not sufficiently studied. In addition, they noted that it was not possible to find a paper discussing DTs with regard to environmental sustainability [11].

Summary

With the myriad of academic and industrial reports on DT development, this Special Issue can only represent a small fragment of the entire DT application landscape, not to forget the highly sophisticated commercial software developed in recent years, which is capable of handling large-scale and complex industrial systems. DT is a promising technology, and its impact is yet to be fully realized.
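As a closing illustration of the core idea the Introduction describes, namely a virtual replica kept in sync with a physical asset through bi-directional data and command flows, the following is a deliberately minimal Python sketch. It is not taken from any contribution in this Special Issue; all class names, sensor fields, and thresholds are invented for illustration, and a production twin would replace the toy rule with simulation models, AI/ML, and a real IoT messaging layer.

class PhysicalAsset:
    """Stand-in for a sensor-equipped machine (for example, a pump or an overhead crane)."""

    def __init__(self):
        self.temperature_c = 20.0
        self.setpoint_rpm = 1000

    def read_sensors(self):
        # Physical-to-virtual direction: telemetry the twin will ingest.
        return {"temperature_c": self.temperature_c, "setpoint_rpm": self.setpoint_rpm}

    def apply_command(self, command):
        # Virtual-to-physical direction: actuation that closes the loop.
        self.setpoint_rpm = command.get("setpoint_rpm", self.setpoint_rpm)


class DigitalTwin:
    """Virtual replica: mirrors the asset's state, evaluates it, and issues commands."""

    def __init__(self, asset):
        self.asset = asset
        self.state = {}
        self.history = []

    def sync_from_physical(self):
        self.state = self.asset.read_sensors()
        self.history.append(dict(self.state))  # retained for diagnosis and analytics

    def evaluate(self):
        # Toy monitoring rule standing in for simulation- or ML-based health inspection.
        if self.state.get("temperature_c", 0.0) > 80.0:
            return {"setpoint_rpm": int(self.state["setpoint_rpm"] * 0.8)}
        return None


if __name__ == "__main__":
    asset = PhysicalAsset()
    twin = DigitalTwin(asset)
    asset.temperature_c = 85.0         # simulate an overheating reading
    twin.sync_from_physical()          # physical -> virtual
    command = twin.evaluate()
    if command is not None:
        asset.apply_command(command)   # virtual -> physical
    print(asset.setpoint_rpm)          # prints 800: the twin throttled the machine

Even at this toy scale, the essential DT pattern is visible: a continuously synchronized virtual state, an evaluation step over that state, and a command path back to the physical asset.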
2021-09-28T01:09:14.509Z
2021-07-12T00:00:00.000
{ "year": 2021, "sha1": "d7cca3792f76364e473322542fc932e89876bde9", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/11/14/6437/pdf?version=1626152955", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "f7433d03d73d2babe211b2bdcfa2e8ece818a115", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
245281002
pes2o/s2orc
v3-fos-license
Educational Attainment Past the Traditional Age of Completion for Two Cohorts of US Adults: Inequalities by Gender and Race/Ethnicity

The vast majority of studies investigating participation in, persistence through, and consequences of postsecondary education focus on educational attainment status among the so-called traditional population of collegegoers between the ages of 18 and 24. This narrow focus leaves largely invisible the role that an expanding set of educational trajectories throughout adulthood plays in shaping social stratification. Using 35-plus and 20 years of follow-up data from the US National Longitudinal Survey of Youth (NLSY)'s 1979 and 1997 cohorts, we find that a substantial share within each cohort is attaining education well into adulthood, and that these trajectories are patterned according to key social and demographic characteristics. In both cohorts, racial/ethnic differences in educational attainment grew over time and, for those attaining the same degree, members of historically disadvantaged groups did so at an older age. Cohort differences in trajectories emerged, however, when considering the intersection of race/ethnicity and socialized gender. Through careful descriptive analysis of two generational cohorts, our study makes clear the role of educational trajectories in the process of cumulative (dis)advantage across the life course, as well as across generations.

Introduction

Educational trajectories, entailing how much and when people attain education over time, are an often-overlooked pathway through which social stratification takes place (Milesi, 2010). Although the educational career is a "transition-rich long-term trajectory within a highly structured institutional system" (Crosnoe & Benner, 2016, p. 179), research on postsecondary education continues to focus predominantly on the six-year window following expected high school completion (Grodsky et al., 2021; Haas & Hadjar, 2020). Similarly, both policymakers and institutions have been slow to recognize and adapt to the reality that upwards of 40% of those who pursue postsecondary education do so beyond age 25 (National Center for Education Statistics [NCES] 2019), and not necessarily full-time or continuously (Bowl & Bathmaker, 2016).

Despite all we have learned over past decades about the patterning of educational attainment according to social and demographic characteristics, there is a dearth of systematic knowledge regarding students' trajectories through higher education. Who enters, who finishes, how, and with what consequences (Haas & Hadjar, 2020)? Importantly, how has this changed across generations? Amidst the expansion and diversification of higher education institutions that accompanied the rise of the Baby Boomer generation (Horn & Carroll, 1996), educational pathways have become more flexible and are occupying a larger share of the life course (Weiss & Roksa, 2016). Such flexibility has potentially opened new routes to upward social and economic mobility, specifically among females and individuals from minoritized and low-income backgrounds whose participation in postsecondary education has grown during this period of expansion. In this case, we would expect to see narrowing disparities in educational attainment across key markers of social status in the United States, including gender, race and ethnicity, and socioeconomic background.
It is also possible that the expansion of educational pathways over the past several generations has erected what is, in reality, only the façade of greater upward mobility for historically disadvantaged populations. These possibilities lead to very different implications for societal inequality. There is a need to better understand the range of trajectories that socially significant subpopulations are pursuing over the life course, how this has changed across generations, and the implications for societal inequality (Ayalon et al. 2008; Crosnoe & Benner, 2016; Elder, 1995; Grodsky et al., 2021). Thus, we explore the number of years of education and the degrees that people attain at different ages, and the extent to which these trajectories differ by select social and demographic characteristics, in the nationally representative United States (US) National Longitudinal Survey of Youth (NLSY) 1979 and 1997 cohorts. By examining variation in attainment patterns over time both within and between generational cohorts, our research underscores the need for continual investments, including research, at multiple levels of society to prevent educational trajectories from persisting as yet one more societal mechanism for the perpetuation of cumulative (dis)advantage.

Social and Demographic Patterning of Educational Attainment and Trajectories Within and Across Generations

We consider patterns in educational trajectories and attainment across two generational cohorts. The NLSY79 cohort includes individuals born between 1957 and 1964, corresponding to the final years of the Baby Boomer generation. The Pew Research Center defines Baby Boomers as those individuals born between 1946 and 1964 (Bialik & Fry, 2019). The NLSY97 cohort includes individuals born between 1980 and 1984, corresponding to the early years of the Millennial generation, which includes those born between approximately 1981 and 1996.

In addition to being America's largest and most racially and ethnically diverse generation to date (Frey, 2018), Millennials can be distinguished from their Baby Boom (and often other generational) predecessors along numerous social, cultural, and policy fronts. For example, Millennials came of age during a period of expanding approval of issues such as the legalization of marijuana and same-sex marriage. Further, their experiences with the rapidly changing demographic composition of America stand in stark contrast to Baby Boomers, who grew up at a time when immigration was at an all-time low (Frey, 2018). Millennials have also delayed marriage and family formation later than did earlier generations (Bialik & Fry, 2019). However, particularly relevant to this research are the educational and socioeconomic patterns that distinguish these two generations.

Individuals comprising the Millennial generation are, on average, better educated than preceding generations, corresponding to the continued expansion of participation in higher education between the NLSY79 and 97 cohorts (Weiss & Roksa, 2016). Among Millennials, 39% of those ages 25 to 37 had a bachelor's degree or higher in 2018, compared with just 25% of Baby Boomers when they were the same age. Millennial women have experienced particularly steep gains in educational attainment. Some 43% of Millennial women between the ages of 25 and 37 in 2018 had obtained at least a bachelor's degree, compared to 24% of Baby Boomer women at the same age.
Moreover, the share of Millennial women with a bachelor's degree is higher than that of men, continuing a trend observed for the first time in the preceding Generation X. What these general patterns do not reveal, however, is whether this rising tide of educational attainment among Millennials is lifting all boats. Here, the picture is less clear. Postsecondary education attainment has also risen for all racial and ethnic young adult groups since the first Millennials were born (Frey, 2018). Yet disparities in educational attainment among White, Black, Hispanic, and Asian adults remain pervasive, with Hispanic and Black individuals lagging behind their White and especially East Asian counterparts (Ream et al. 2012; Bailey & Dynarski, 2011; Weiss & Roksa, 2016).[1] Women have made substantial gains in attainment, on average, but these gains have been unevenly dispersed among women of varying racial/ethnic and socioeconomic backgrounds.

Of course, differences in educational attainment within and across these two generational cohorts are perhaps unsurprising given the well-documented fact that educational opportunities and outcomes are often patterned according to demographic characteristics (e.g., race/ethnicity, social class). But status differences reveal little about the trajectories individuals are pursuing and how these trajectories vary within and across generations, and little about whether shifts in educational trajectories across generations are ameliorating or exacerbating longstanding inequities in attainment across racial/ethnic, gender, and socioeconomic markers of group membership.

Milesi (2010) defines educational trajectories as including "the type of educational experiences individuals have, the timing at which different transitions occur, and the sequence of events within educational levels" (p. 26). Over the past several decades, American students have increasingly utilized a more expansive set of postsecondary educational trajectories. Indeed, as early as 1996, Horn and Carroll observed that the so-called traditional postsecondary educational trajectory (typically defined by college entry immediately after completing high school, full-time attendance at a four-year postsecondary institution, and continuous enrollment until graduation) had become the exception, not the rule. The latest available data indicate that about 40% of US college students are over 25 years old (NCES 2019), a defining characteristic of the non-traditional college population. Researchers also consider a variety of other characteristics to define the non-traditional student population, including especially enrollment, parental, and employment status. Here, too, the data are revealing. Over half of all undergraduates attended college on a part-time basis in 2015 (Chen, Ziskin, and Torres 2020), while about 20% of US college students are parents (Government Accountability Office, 2019) and 40% work more than 30 hours a week (Carnevale et al., 2015).

Describing Generational Changes in Educational Attainment Across Groups Through the Dual Lenses of Educational Trajectories and Cumulative (Dis)advantage

The timing of the data collected for both waves of the National Longitudinal Survey of Youth makes these data well-suited for documenting the trend away from the historically traditional postsecondary educational trajectory and its implications for societal inequality.
One previous study of educational trajectories in the NLSY 1979 cohort (Milesi, 2010) reported that 48% of those who eventually attended a two-year college and 25% of those who eventually attended a four-year college did not attend college immediately after high school. In that study, adults navigating such "non-traditional" pathways through higher education also were more likely to come from historically underserved backgrounds as reflected by participants' gender, race/ethnicity, and socioeconomic position, a pattern that has been similarly observed among members of the NLSY 1997 cohort (Aughinbaugh, 2008).

On the face of it, greater participation in non-traditional trajectories is conceivably neutral in its implications for degree attainment across groups defined by socially consequential status markers such as gender, race/ethnicity, and socioeconomic position. The existence of relationships between these individual characteristics and participation in non-traditional educational trajectories does not necessarily indicate inequality in degree attainment. For this to occur, two conditions must be satisfied, according to Milesi (2010): an association between students' characteristics, background, and skills and their participation in non-traditional trajectories, and an association between students' trajectories and degree attainment. Much of the existing literature on college access and persistence indicates that these two conditions are typically satisfied, such that deviating from a traditional trajectory negatively influences students' likelihood of postsecondary degree completion. What is less clear from this literature is how this has shifted over time. As a result, we know relatively little about the extent to which educational trajectories constitute a mechanism through which disadvantages based on ascriptive characteristics may accumulate not only within but across generations.

Cumulative (Dis)advantage

Of particular relevance for examining disparities according to socially defined markers of status across time is the concept of cumulative (dis)advantage (CDA; DiPrete & Eirich, 2006; Merton, 1988; Rank, 2009). DiPrete and Eirich (2006) describe CDA as "a general mechanism across any temporal process … in which a favorable relative position becomes a resource that produces relative further gains" (p. 271), and Dannefer similarly defines CDA as "the systemic tendency for interindividual divergence in a given characteristic (e.g., money, health or status) with the passage of time" (Dannefer, 2018, p. S327). Students are nested in families and in schools, navigating relatively complex social lives with peers, and functioning as members of neighborhoods and communities. Research indicates that race, gender, and class dynamics are consequential in each of these domains and that these advantages or disadvantages accumulate over time, such that many of the same demographic groups experiencing more disadvantage early in life also attain less education through age 25 and remain at an educational disadvantage that results in widening inequalities later in life (Alon, 2009; 2001; Raftery & Hout, 1993). Dannefer (2018) observes that the phenomenon of CDA is grounded in generative social dynamics that often go unrecognized, such that observed patterns of increasing inequality are more readily understood than the processes that produce such patterns.
Importantly, CDA is transmitted intergenerationally and can serve to perpetuate existing race and class divisions not only within but across generations (Shapiro, 2017), leading researchers to emphasize the benefit of comparative data across cohorts (Dannefer, 2018). To advance understanding of the processual role that educational trajectories may play throughout the life course, we draw on notions of cumulative (dis)advantage. We posit that the study of educational attainment can be improved by the application of theory and methods that attend to education not only as a status of school enrollment/completion but also as a potentially stratifying process inhering within the operation of temporally organized social systems such as educational and market institutions over the life course and across cohorts.

Goal of Current Study

Our study seeks to explore the breadth and fluidity of Americans' educational experiences across the life course in two national cohorts. In light of changes in the profile of the US college student population, which has occurred alongside widening economic inequality (Duncan & Murnane, 2011), changes in racial/ethnic discrimination (Valdez & Golash-Boza, 2017), increased opportunities for women (Collins, 2009), and a more developed prison-industrial complex that has disproportionately ensnared men of color (Alexander, 2012), we anticipate changes in attainment trajectories across the NLSY79 and NLSY97 cohorts as we pursue the following research questions: (1) What is the type and timing of education people attain across cohorts? (2) Does the type/timing of education vary by sociodemographic characteristics?

We build upon a life-course conceptualization of educational trajectories (Crosnoe & Benner, 2016; Milesi, 2010) and the concept of cumulative (dis)advantage (DiPrete & Eirich, 2006; Elman & O'Rand, 2004; Rank, 2009) to describe the type and timing of attainment for NLSY 1979 participants through age 40 as well as the experiences of the more recent NLSY 1997 cohort, while also considering trends across the cohorts. We explore how much education people attain over different time frames, the timing of educational attainment, and differences in educational attainment by sociodemographic characteristics. In addition to studying changes over time across cohorts, we were also interested in when inequalities emerged within a single cohort. We coded these two complex datasets in a comparative way, using a fine-grained descriptive approach, which is especially appropriate when seeking to identify overlooked problems and generate new hypotheses and issues to further study (Loeb et al., 2017). Especially insofar as opportunities to obtain levels of education ebb and flow with changing social conditions (Müller & Karle, 1993; Raftery & Hout, 1993), it is worth exploring the varying opportunity structures in long-term profiles of school attendance (Roksa & Velez, 2010; Weiss & Roksa, 2016). To the best of our knowledge, this is the first empirical investigation comparing inequalities in later-life educational attainment across two different NLSY cohorts over a substantial portion of the life course.

Data Sources

The 1979 NLSY is a nationally representative cohort study conducted by the US Bureau of Labor Statistics that recruited 14-21 year-old US males and females in 1979 and conducted in-person and telephone interviews annually until 1994 and then biennially (for further information, see www.nlsinfo.org/content/cohorts/nlsy79). We use follow-up data through 2013.
A complex multistage sampling approach randomly sampled households in the USA, screened for eligible participants, and oversampled Black youth, Hispanic youth, economically disadvantaged non-Hispanic non-Black youth, and individuals serving in the military (CHRR, 2008). The 1997 NLSY, also administered by the US Bureau of Labor Statistics, used a similar multistage sampling strategy to recruit a nationally representative cohort of adolescents ages 12-16 in 1997; participants have been surveyed annually since (more at www.nlsinfo.org/content/cohorts/nlsy97).[2]

Educational Attainment

For the NLSY79 and NLSY97 cohorts, we used data through 2012 to assess continued education across the lifespan, with a focus on four ages: 25, 30, 35, and 40. NLSY97 data through 2013 were used to assess continued education past 25 and 30. At each survey wave, participants reported their highest year of education attained as of that date (month-by-month data are available for the NLSY97), the highest degree earned (high school/GED, associate's, bachelor's, master's, or doctoral, including professional doctorates) as of that date, and whether they were currently enrolled in school. We used month and year of birth and interview to calculate the ages at which educational status was reported. Though NLSY79 educational data were less thoroughly collected, in most survey years participants reported the month and year in which they earned degrees, allowing us to calculate their age at each degree; for degrees with no associated date, we used the halfway point since the previous interview date. We considered participants to have continued their education past a given age if, at any time after that age, their reported number of years of education increased, they reported earning a higher degree, and/or they were enrolled in high school or higher education. We determined participants to have completed their education if they did not subsequently report being enrolled in or completing more years of formal schooling or any higher degrees; however, if there were no more data on education for an individual beyond a given age, we considered them censored and did not include them in analyses beyond that age (with the exception of calculating the average years of education at each age, for which we used a last-observation-carried-forward method to maintain a consistent sample for year-to-year comparisons).

Race/Ethnicity

We categorized NLSY79 respondents by race/ethnicity according to the primary origin with which they identified: Black, Hispanic, Asian/Pacific Islander, and White/other, a group which is majority White but also includes responses of "other," "American," and "Native American" (the NLSY is unable to distinguish between those with Native American/American Indian heritage and those who may have misinterpreted the response choices as referring to being born in the USA, which resulted in a much larger than expected proportion of the sample labeled as such (NLS 2016)). NLSY97 respondents were more carefully classified by the NLSY into 6 categories, including American Indian/Alaska Native (n = 60) and mixed race (non-Hispanic) (n = 83). For some of the racial/ethnic comparisons across the two cohorts, we used only White/other, Black, and Hispanic participants due to small sample sizes in the other groups.
Other Variables

For the NLSY79 cohort, several questions in the initial 1979 interview referred to participants at age 14, including area of residence (south/non-south, urban/rural/farm), whether parents/guardians worked for pay, and whether any household members received newspapers, magazines, or had access to a library card. Respondents also reported foreign languages spoken at home during childhood, whether they and their parents were born in the USA, and the number of years of education of each parent. In 1994, they reported whether they had attended Head Start or any other preschool. In 2012, participants were asked whether they had experienced, during childhood, living with someone with a mental illness, living with an alcoholic, and/or being physically abused. Initial interview questions in the 1997 cohort included parental education, parental employment, region of residence, the number of places lived before age 12, whether the mother or both parents were on the child's birth certificate, whether the child had attended Head Start, and whether s/he had been in child care for more than 20 h/week. Parents/guardians were also asked if children had gone through any "hard times"; examples given were living in a place without water or electricity or in a homeless shelter.

Analytic Approach

Descriptive statistics were weighted using NLSY custom longitudinal sampling weights for each cohort. Survey-weighted chi-squared tests that accounted for clustering within households (and within primary sampling units in the NLSY97 cohort) were used to compare covariate distributions across educational trajectory groups, defined by the age (25, 30, 35, 40) after which participants did not report completing more education. We graphed years of education attained by each year of age in order to examine differences in trajectories by sociodemographic characteristics. Results from these descriptive analyses are presented in Table 1 and Figs. 1, 2, and 3.

In order to better accommodate censoring, we also conducted time-to-event analyses in which we analyzed observations by age at degree completion. We emphasize that these analyses were not intended to isolate causal relationships, but instead to provide additional confirmation of results from our descriptive analyses. We used a multistate framework (Putter et al., 2007) in which all participants begin without a high school degree and remain in that state until they received a diploma or a GED, or were censored. We then allowed them to transition into a state defined by the completion of an Associate's degree or two years of post-high school education, at which point they were considered "at risk" to enter into the final state, attainment of a Bachelor's degree.
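The excerpt does not include the authors' code (their plotting, at least, relied on R's 'ggplot2'), so the following is only a rough sketch, in Python with the lifelines package, of how the weighted, household-clustered fit for a single transition of such a time-to-event setup could be wired up. The toy data and every column name are hypothetical stand-ins rather than NLSY field names.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Toy data standing in for one transition of the multistate setup
# (e.g., time from Associate's-level attainment to Bachelor's completion).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age_at_event": rng.uniform(18, 40, n),   # age at degree completion or censoring
    "completed": rng.integers(0, 2, n),       # 1 = degree earned, 0 = censored
    "female": rng.integers(0, 2, n),
    "parent_yrs_educ": rng.integers(8, 18, n),
    "weight": rng.uniform(0.5, 2.0, n),       # stand-in for a custom longitudinal sampling weight
    "household_id": rng.integers(0, 250, n),  # siblings share a household
})

cph = CoxPHFitter()
cph.fit(
    df,
    duration_col="age_at_event",
    event_col="completed",
    weights_col="weight",        # sampling weights enter as case weights
    cluster_col="household_id",  # robust (sandwich) standard errors clustered on households
    robust=True,
)
cph.print_summary()  # hazard ratios are exp(coef) for female and parent_yrs_educ

A full replication would instead chain several such transitions in a multistate model (for example with an R package such as 'mstate', which implements the framework of Putter et al., 2007) and handle entry times between states; the sketch above only illustrates the weighting and household clustering described in the text.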
[Notes to Table 1: The figure in each cell represents the proportion of individuals in that category who had data available beyond the age indicated in the column header and who continued their education beyond that age. For example, the first cell of the table indicates that 45.2% of females in the NLSY79 cohort who had data past age 25 continued their education past age 25. Given that there was sample attrition over time in both cohorts, there are smaller sample sizes for these analyses over time: the NLSY79 cohort had 12,034 participants who provided data at age 25, 10,289 who provided data at age 30, 9,125 who provided data at age 35, and 8,713 who provided data at age 40; similarly, the NLSY97 cohort had 8,229 participants who provided data at age 25 and 5,986 who provided data at age 30. The cohorts were relatively evenly divided by gender (at age 25, 52.2% of the NLSY79 cohort and 47.5% of the NLSY97 cohort were women) and had racial/ethnic diversity (in the NLSY79 cohort, 62.6% were White, 25.8% were Black, 15.9% were Hispanic, and 1.2% were Asian; in the NLSY97 cohort, 53.4% were White, 28.2% were Black, 22.7% were Hispanic, 1.9% were Asian, 1.0% were multiracial, and 0.8% were Native American). Asterisks reflect results from chi-squared tests for differences between finishers and continuers at each age, adjusted for clustering at the household level (NLSY79) or for the complex sampling design (NLSY97). Since a single comparison may be of particular interest to a reader, we did not conduct any multiple-comparisons adjustment when calculating the chi-square tests, but we only report chi-square tests that have a p-value of less than 0.01, a more conservative cut-off than the traditional cut-point of 0.05. *p < 0.01; **p < 0.001.]

[Figure note: Years of education refers to the cumulative number of years of education reported by an NLSY respondent at a given age, including both those who did and did not return to education. Due to low numbers of Asians, estimates were imprecise, making comparisons difficult; they were excluded from the figure for ease of comparison between the three larger ethnic groups.]

Plots were generated using 'ggplot2' (Wickham, 2009). Some of these findings are depicted in Fig. 4. To test for differences in the education rates across cohorts, we combined the datasets and fit models using the variables for which comparisons could be made: gender, race/ethnicity, and parental education (there was little overlap in the early-life variables). This required recoding the race/ethnicity variable in the NLSY97 to match the NLSY79's four categories. We accounted for household-level clustering. We fit multistate regression models, as above, to the combined complete-case dataset, this time including a main cohort effect and an interaction term with cohort for all the variables in the models. We note that we thought it was important to consider racial/ethnic inequalities without controlling for socioeconomic factors, given that structural racism and other race/ethnicity-related factors can often contribute to SEP, and so SEP may mediate any disparities observed.
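Extending the same hypothetical lifelines sketch, the pooled-cohort specification with a cohort main effect and a cohort-by-covariate interaction term might look as follows; again, this is illustrative code with invented variable names and toy data, not the authors' implementation.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 1000
pooled = pd.DataFrame({
    "age_at_event": rng.uniform(18, 40, n),
    "completed": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
    "nlsy97": rng.integers(0, 2, n),          # 0 = 1979 cohort, 1 = 1997 cohort (main cohort effect)
    "household_id": rng.integers(0, 500, n),
})
# Interaction term: does the gender difference in completion rates differ across cohorts?
pooled["female_x_nlsy97"] = pooled["female"] * pooled["nlsy97"]

cph = CoxPHFitter()
cph.fit(
    pooled,
    duration_col="age_at_event",
    event_col="completed",
    cluster_col="household_id",  # household-level clustering, as in the text
)
# The Wald p-value on the interaction term plays the role of the
# "interaction p" values reported in the Results (e.g., p = 0.002 for gender).
print(cph.summary.loc["female_x_nlsy97", ["exp(coef)", "p"]])

In the actual analysis this specification would sit inside the multistate structure and would also carry the race/ethnicity and parental-education covariates, together with their cohort interactions.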
Results

We begin by describing trends in the timing and nature of participants' educational pursuits, both within and between the NLSY79 and NLSY97 cohorts. To provide supplemental confirmatory (or disconfirming) evidence, we then assess the statistical significance of these descriptive trends with time-to-event analyses. Our results suggest two main paths through which educational trajectories extending beyond age 25 can reinforce rather than narrow longstanding inequities according to social background. One path is indicative of cumulative advantage: people whose parents had higher education continue school longer because they are, on average, pursuing more advanced degrees. The other path is indicative of cumulative disadvantage: people from historically disadvantaged backgrounds have lower rates of degree completion across ages. When we stratify by final degree attained, those groups take longer to complete the same degree. For example, among the set of people for whom a bachelor's degree is their highest degree, those from historically disadvantaged backgrounds get a bachelor's degree later in life. These paths remain largely persistent across generations, with some key exceptions.

[Figure 4: Hazard ratios (with vertical bars indicating 95% CI) from Cox models for time to degree completion in the NLSY79 and NLSY97 cohorts. Models 1 and 2 included only the covariates for which estimates are shown (i.e., sex, race/ethnicity, or parental education). Model 3 for the NLSY79 cohort also included as predictors: parental employment, health status as a child, geographic location as a child (south vs. non-south; town/city vs. farm/ranch vs. non-farm country), and childhood access to magazines, newspapers, and a library card. Model 3 for the NLSY97 cohort also included as predictors: number of places lived as a child (< 5 or not), the presence of both parents on the birth certificate, whether the child had gone through hard times, attended child care for > 20 h/week, attended Head Start, and geographic region (northeast, north central, south, west). All models use multiply imputed data for covariates, with the exception of parental variables, which were not imputed for a parent who was non-resident or unknown.]

Descriptive Trends in the Accumulation of Education Within and Between Cohorts

The NLSY79 and NLSY97 cohorts (Table 1) were both evenly divided by gender. A majority of participants were White, but both cohorts also included substantial numbers of Black and Hispanic participants. In both cohorts, a high school or high school equivalent degree was the most common eventual highest degree. Almost half (42%) of our NLSY79 sample continued their education beyond age 25, and 12% continued educational activities past age 40 (Table 1).

The extent to which the NLSY79 sample continued their education into adulthood differed by parental education and by degree attained. In the NLSY79 sample, parental education was a factor not only in overall educational attainment but also in the timing of any given final degree attained: individuals whose parents had higher levels of education completed their own education at an earlier age, regardless of final degree. For example, among those whose highest degree was an Associate's and whose father had fewer than 12 years of education, 66% continued past age 25, while 60% of those with an Associate's degree and whose father had 12-16 years of education continued past age 25 (Fig. 1a). Similar patterns held through age 40, as well as for maternal education.

A similar pattern was observed for NLSY97 participants. In general, NLSY97 participants whose parents had more education stayed in school for longer (Table 1), and these additional years of education were disproportionately likely to result in a more advanced degree. In contrast, NLSY97 participants whose parents had the least education were most likely to continue education past age 25 to complete the same final degree that participants with more parental education completed at a younger age. For example, among people whose parents had less than a high school education and whose final degree was a bachelor's, around 69% were still getting education past age 25, as compared to around 50% of people whose parents had ≥ 16 years of education and whose final degree was a bachelor's (Fig. 1b).

With increased age, a growing proportion of NLSY79 participants continuing their education sought Associate's degrees (Table 1). Participants whose final degree was an Associate's were more likely to continue schooling at later ages than those whose final degree was a Bachelor's.
For example, 34% of people who eventually received Associate's degrees continued education past age 35, but only 23% of those whose eventual degree was a Bachelor's persisted past age 35.

In general, a larger proportion of women than men continued their educational activities after age 25 (54% vs. 46%; Table 1), although differences by sex were primarily driven by the fact that Black and Hispanic/Latina women continued their education through later ages (data not shown). Similarly, in the NLSY97 cohort, a higher proportion of women than men continued education past age 25 (47% compared to 36%); the same was true at age 30 (20% vs. 14%). In this more recent cohort, however, only a slightly higher proportion of Black women continued past age 30 (19%) relative to White and Hispanic/Latina women (16% and 16%, respectively).

Differences in educational attainment by race/ethnicity began to emerge when NLSY79 study participants were in their early twenties and were statistically significantly distinct by age 22 (Fig. 2). At every age after 18, Asian and White participants had more education, whether considering years of education or degrees attained, than Black and Hispanic participants (Table 1). Overall, White people whose terminal degree was an Associate's or a Bachelor's were more likely than other racial/ethnic groups to complete their education by age 25 (Fig. 1). Further, although some subset of participants from all racial and ethnic groups continued to accumulate education throughout follow-up, with each progressive age beyond 18, White participants were most likely to be completing advanced degrees while Black and Hispanic participants were most likely to continue pursuing an Associate's or Bachelor's degree. Despite the continued accumulation of education throughout follow-up, these gaps persisted. Like the 1979 cohort, in the NLSY97 cohort, White participants completed their degrees earlier than Black or Hispanic participants.

Despite overall patterns in educational trajectories according to race and sex, there were some important distinctions between the NLSY79 and NLSY97 cohorts (Fig. 3). In particular, the typical trajectories of Hispanic participants changed considerably, although the nature of these changes differed according to sex. Although Black women still attained more education than Hispanic women in both cohorts, the Black-Hispanic gap partially closed over time. And while Black men in the NLSY79 cohort attained more education than Hispanic men, the reverse was true in the NLSY97 cohort. Additionally, the White-Hispanic gap in years of education decreased slightly for both men and women between the 1979 and 1997 cohorts, although this gap remained relatively large, at approximately one year of education or more. When we tested for differences in educational attainment across the two cohorts, Asian and Hispanic participants of the 1997 cohort increased their education completion rate relative to White participants, but both Black participants and men appeared to have slowed compared to White people and women, respectively (interaction p < 0.001 for race/ethnicity; interaction p = 0.002 for gender).

Testing Descriptive Trends in Attainment Disparities Over Time

The descriptive patterns above, by which participants attained degrees at earlier and earlier ages with increasing levels of parental education, were confirmed in the time-to-event analyses.[3]
3 These differences persisted even when accounting for sex, race/ethnicity, and a variety of baseline factors that we hypothesized could influence educational attainment. Figure 4 shows the hazard ratios for each cohort from a Cox model with gender and race/ethnicity as predictors. Models that adjusted for race/ethnicity, gender, and a number of early-life factors demonstrate the forceful influence of parental educational advantage (data available upon request). Further, our testing for differences across cohorts confirmed that the influence of parental education remained essentially unchanged over time (interaction p = 0.288 for maternal education; interaction p = 0.544 for paternal education). Turning to the role of race/ethnicity and sex in educational attainment disparities beyond age 25, a direct comparison between the NLSY79 and NLSY97 cohorts similarly confirmed the inequalities at every age that were evident in the descriptive analyses. Although all race/ethnicity-gender subgroups increased educational attainment from the 1979 cohort to the 1997 cohort except Black men, subgroup educational attainment increased at different rates, leading to the perpetuation of most racial/ethnic and gender inequalities described above in both the timing and extent of educational attainment (Fig. 3). In fact, hazard ratios from the time-toevent analyses were almost identical across the two cohorts (Fig. 4). Notable is that Black-White inequalities in educational trajectories appeared to widen between the NLSY79 and NLSY97 cohorts for the ages at which both cohorts have data (up to 34 years old). Strengths and Limitations of this Research Our study had several strengths. First, to the best of our knowledge, this is one of the longest follow-up periods for studying educational attainment trajectories, which enabled us to more fully understand how educational attainment evolves over the lifespan. Second, we could both compare across educational transitions within a cohort over 30 years, and also across two recent US cohorts from different generations. Third, NLSY cohorts are nationally representative, increasing our study's generalizability. Fourth, we could look at trends among three major racial/ethnic groups in the USA: White people, Black people, and Hispanic people. Our study also has limitations. First, the NLSY97 cohort seems less likely to pursue education later in adulthood compared to the NLSY79 cohort, but that may be an artifact of a shorter follow-up period. It may be that some NLSY97 participants return to school in future years. We look forward to the continued follow-up of these two cohorts to help deepen our understanding of educational trajectories across the lifespan. Second, given that the goal of this research was to describe patterns in educational trajectories beyond age 25 rather than to fully explain differences in these patterns across groups, we do not include an exhaustive set of control covariates in our analyses. As we know from prior research, a host of individual attributes are correlated with the key markers of ascribed social status that we include, especially parent education. It will be important for future analyses to further explore how attributes observed at different life stages, ranging from childhood cognitive skills to later marriage and parental status, influence educational trajectories in ways that may be correlated with parental education, race/ ethnicity, and gender (see Grodsky et al., 2021 for a recent example of such research). 
Third, we were limited in only being able to focus on the three major racial/ethnic groups, and not having the statistical power to further distinguish between finer classifications of race/ethnicity. We also note that there is a further level of potential inequality not explored in this paper: horizontal inequalities in the selectivity of the higher education institution attended (Mullen et al., 2003;Perna, 2000). Disparities in the eliteness of the institution could further maintain (Lucas, 2001;Raftery & Hout, 1993) and/or expand inequality (Alon, 2009). We encourage the reader to keep these strengths and limitations in mind when considering the discussion of our results below. Discussion The vast majority of studies investigating participation in, persistence through, and consequences of postsecondary education have focused on the so-called traditional population of collegegoers between the ages of 18 and 24. This remains the case even as the non-traditional undergraduate population in the United States has expanded significantly since the mid-1970s (Chen, Ziskin and Torres 2020;Haas & Hadjar, 2020). This narrow focus leaves largely invisible the role that an expanding set of educational trajectories throughout adulthood play in shaping social stratification. Through careful descriptive analysis, our study makes this role more visible by examining relationships between sociodemographic characteristics long shown to confer cumulative advantages across the life course, on the one hand, and educational trajectories, on the other hand, as well as how these relationships have changed over time. Educational Attainment and Social Stratification Many researchers have examined socioeconomic disparities in educational opportunities and educational attainment from kindergarten through college (Berliner, 2006;Darling-Hammond, 2004;Duncan & Murnane, 2011;Engle & Black, 2008;Reardon & Portilla, 2016). More recent work identifies graduate and professional education as a site of persistent stratification (Posselt & Grodsky, 2017). We add to this literature by documenting that many continue to pursue formal educational opportunities after age 25 (when the US Census and others typically assume educational attainment to cease), and that inequalities in educational attainment widen once schooling is no longer mandatory. By examining individual educational trajectories over many decades and across generations, we find that education during adulthood is playing a more important role for social mobility and social reproduction than previously understood. Moreover, school continuation at later stages does not appear independent of social background. This is largely because Black and Hispanic people and people with low parental education take longer to attain the same degrees that White people secure much earlier in life. In fact, the results from the Cox models for parental education for both cohorts were consistent and barely changed with the inclusion of other baseline variables. Consistent with the concept of cumulative (dis)advantage, people in both the NLSY79 and NLSY97 cohorts whose parents completed more education were more likely to continue their education and to earn more advanced degrees well into adulthood, evidencing pronounced and persistent educational inheritance and socioeconomic inequalities in educational attainment. 
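For readers who want to see what the time-to-event specification behind these hazard ratios looks like in practice, the minimal sketch below fits a Cox proportional hazards model for age at degree completion with gender, race/ethnicity, and parental education as predictors, roughly in the spirit of Models 1 and 2. The toy data-generating process, the variable names, and the use of the Python lifelines package are illustrative assumptions rather than the authors' actual workflow, and the sketch omits the multiply imputed covariates and the additional Model 3 controls described in the figure note.

```python
# Illustrative sketch only: simulated data, not the NLSY extracts or the authors' code.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "black": rng.integers(0, 2, n),
    "hispanic": rng.integers(0, 2, n),
    "parent_educ_years": rng.integers(8, 19, n),
})

# Hypothetical data-generating process: higher parental education -> earlier completion.
rate = 0.05 * np.exp(0.08 * (df["parent_educ_years"] - 12))
time_to_degree = 18 + rng.exponential(1.0 / rate)   # age at highest-degree completion
censor_age = 34                                     # e.g., the NLSY97 follow-up horizon
df["age_at_exit"] = np.minimum(time_to_degree, censor_age)
df["completed_degree"] = (time_to_degree <= censor_age).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="age_at_exit", event_col="completed_degree")
cph.print_summary()   # the exp(coef) column gives hazard ratios with 95% CIs, as in Fig. 4
```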
Overall, Asian and Hispanic participants of the 1997 cohort increased their education completion rate relative to White participants, but both Black participants and men appeared to have slowed compared to White people and women, respectively. More nuanced patterns emerged when considering race and gender simultaneously. White men, who had roughly equal amounts of educational attainment as White women in the NLSY79 cohort, had much less education than White women in the NLSY97 cohort and Black women attained almost as much education as White men in the NLSY97 cohort. Black women still attained more education than Hispanic women in both cohorts, but the Black-Hispanic gap partially closed over time, while Hispanic men in the NLSY97 cohort were attaining more education than Black men, a reversal from NLSY79 trends. In both NLSY cohorts, women and people of color were more likely to continue their formal education at later ages than individuals from more historically advantaged groups. However, among those who return to or continue education after 25, White people were more likely to complete Bachelor's degree or higher, while Black and Hispanic people were more likely to end with Associate's degrees. This is especially consequential for the Millennial (NLSY97) generation given a sharper divide in the economic status of Millennials who do and do not have a college education relative to any prior generation (Bialik & Fry, 2019), particularly between those with a bachelor's or advanced degree versus those with less education. Implications for Future Research As Dannefer (2018) observes, the phenomenon of CDA finds its footing in a set of generative social dynamics that tend to go unrecognized: patterns of increasing inequality are easier to spot than the underlying processes that yield these patterns. Contributing to this opacity, an individual's structural position within a social system-although intertwined with socially consequential characteristics such as socioeconomic position, race/ethnicity, and gender-constitutes a socially generative force that can operate independently of individual characteristics. The pursuit of educational attainment over an extended timeline may serve as one underlying process of cumulative disadvantage insofar as the operation of social systems such as educational and market institutions reward normative, temporally organized schedules of attainment (O'Rand, 2002). Precocious and on-time educational attainment is typically associated with more economic opportunities, including higher-paying jobs (Angrist & Krueger, 1991;Hout, 2012), and people can incur opportunity costs by not obtaining higher-paying jobs until later in life (DiPrete & Eirich, 2006;Elman & O'Rand, 2004). Other outcomes can also be affected; for example, a rich literature links higher educational attainment to better health (Cohen & Syme, 2013), with differential returns for some marginalized groups (Vable et al., 2018). There may also be meaningful disruptions to intended educational trajectories, like marriage, parenthood, and/or other caregiving responsibilities. These considerations take on ever greater significance given that the delayed pursuit of education is common among women and members of minoritized racial/ethnic and/or socioeconomic groups, people who also encounter greater barriers to traditional educational trajectories (Grodsky & Jones, 2007;Pérez & McDonough, 2008). Our combined findings suggest substantial cumulative disadvantage within and across cohorts. 
At the same time, some of the differences we observe across generations hint at the potential for policy and program interventions within temporally organized school and labor market institutions (Elman & O'Rand, 2004), as well as shifting social and cultural contexts, to redress these inequities. Several directions for research emerge from these findings, including the need for 1 3 research to assess how the type, timing, and sequence of educational attainment perpetuates inequities within and across generational cohorts, as well as research to understand the structural forces that may lead these educational inequities to manifest. One research direction is toward policy analysis that uses a long-range view to shed light on how broad policy reforms may condition educational trajectories in ways that are consequential for particular groups. Importantly, the operation of inherent systemic dynamics that produce tendencies toward CDA does not exclude the possibility that factors outside the system will influence CDA, in either direction (Dannefer, 2018). Indeed, we observe that the educational trajectories of White women further diverged from their Hispanic and Black peers between the NLSY79 and 97 cohorts during a period which, despite increasing racial and ethnic diversity, White women were among the greatest beneficiaries of affirmative action (Hall, 2015). Meanwhile, Black men are the only group for which we observe a waning trajectory and decrease in attainment over a time period that coincides with the mass incarceration of this population. While, importantly, we cannot ascertain the causal order of these phenomena given the descriptive nature of this research, our results illustrate the value of rich descriptive analysis for lifting out trends that might otherwise go unnoticed, as well as the value of comparing these trends over time. Especially insofar as opportunities to obtain levels of education ebb and flow with changing social conditions (Müller & Karle, 1993;Raftery & Hout, 1993), it is crucial to account for the exogenous forces that disrupt long-term profiles of school attendance within and across cohorts (Roksa & Velez, 2010;Weiss & Roksa, 2016). Intergenerational analyses are arguably key given that such interventions, even when successful in modifying outcomes for one generation, may do little to disrupt the underlying systemic tendencies that generate CDA (Dannefer, 2018). We encourage researchers to continue conducting detailed intersectional analyses to more fully understand patterns by race and sex, and to explore not only which interventions may be particularly beneficial for those who are historically disadvantaged but also the extent to which such interventions prove durable across generations. Another direction for research is toward a more developed understanding of the structural forces that lead to the accumulation of disadvantage across the life course, and across lifetimes. Haas and Hadjar (2019) emphasize conceptual parallels between life-course sociology (Crosnoe & Benner, 2016;Settersten & Mayer, 1997) and research on trajectories in higher education (Milesi, 2010) and the potential value in their combination for understanding how variations in educational attainment trajectories are shaped by micro-, meso-, and macro-level processes. 
Identifiable mechanisms and processes embedded in everyday social life across system levels, from micro to macro, give rise to cumulative (dis)advantage (Dannefer, 1987(Dannefer, , 2003Elias & Feagin, 2016;Pallas & Jennings, 2009). In particular, microlevel dynamics constitute the most fundamental level at which CDE processes operate given their significance in shaping an individual's characteristics, identity, and sense of agency in the world (Dannefer, 2018). Such processes contribute to organizational narratives that condition access to resources and opportunities as individuals move through educational institutions across time (Holstein & Gubrium, 2000). 4 Finally, we point to a need for research that interrogates the degree to which the effects of programmatic or policy interventions in childhood endure and for whom across the midlife years, particularly given the lack of research focus on this period relative to early-life effects (Dannefer, 2018). For instance, high-quality early childhood education is associated with increased and "on-time" educational attainment (Deming, 2009;Heckman et al., 2010); our study similarly found that a higher proportion of those who participated in Head Start attained education at all ages than those who did not. Less clear is the extent to which such effects are contingent upon the nature of resource allocation across the years of midlife according to individuals' structural positions (Dannefer, 2018). Research in each of these directions must necessarily draw on a range of theoretical and disciplinary perspectives. Although a wide range of research addresses educational inequality at key junctions along the pathway into and through undergraduate education (Brint & Karabel, 1989;Coleman et al., 1966;Contreras, 2011;Gamoran, 1987), we argue that our findings on late-stage educational attainment are not sufficiently explained by any single theory to date. Some components of some theories are useful, including classical educational transitions (Mare, 1980), the neo-classical response to purported late state egalitarianism (Alon, 2009;Lucas, 2001;Raftery & Hout, 1993), and perhaps especially the cumulative (dis)advantage perspective on educational inheritance and status attainment within temporally organized schools and market institutions (DiPrete & Eirich, 2006;Elman & O'Rand, 2004;Merton, 1988). Recent research (Grodsky et al., 2021) provides an example of the kind of scholarship needed to advance theory on educational trajectories across the life course. These authors build from CDA to propose a theory of "staged advantage," based on the premise that the intersection of life-course events and educational trajectories as a cohort ages may produce varied patterns of relative advantage and disadvantage at different stages. Conclusion In both the NLSY79 and 97 cohorts, which in turn correspond to the Baby Boomer and Millennial generations, racial/ethnic inequalities in educational attainment grew from adolescence into adulthood, and socioeconomic inequalities also were more pronounced for more advanced degrees and at later ages. While many institutions of higher education remain focused on the stereotypical student of decades past who is straight out of high school, our detailed descriptive analyses suggest that educational trajectories well into adulthood are serving as yet another overlooked process through which advantages and disadvantages differentially accumulate in timeworn patterns across groups. 
Data Availability The analyses were conducted using data that are publicly available. Code Availability Custom code may be shared upon reasonable request. Conflict of interest The authors have no conflicts of interest to disclose. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
2021-12-19T17:09:09.585Z
2021-12-16T00:00:00.000
{ "year": 2021, "sha1": "b7c0027b4a6e2e93cc3888408ce094b360e63837", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12552-021-09352-1.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "d695c4cc68240076a9509c4c9e9667194e844064", "s2fieldsofstudy": [ "Sociology", "Education" ], "extfieldsofstudy": [] }
91179255
pes2o/s2orc
v3-fos-license
Synthesis of μ-ABC Tricyclic Miktoarm Star Polymer via Intramolecular Click Cyclization Cyclic polymers exhibit unique physical and chemical properties because of the restricted chain mobility and absence of chain ends. Although many types of homopolymers and diblock copolymers possessing cyclic architectures have been synthesized to date, there are relatively few reports of cyclic triblock terpolymers because of their synthetic difficulties. In this study, a novel synthetic approach for μ-ABC tricyclic miktoarm star polymers involving t-Bu-P4-catalyzed ring-opening polymerization (ROP) of glycidyl ethers and intramolecular copper-catalyzed azido-alkyne cycloaddition (CuAAC) was developed. First, the t-Bu-P4-catalyzed ROP of decyl glycidyl ether, dec-9-enyl glycidyl ether, and 2-(2-(2-methoxyethoxy) ethoxy) ethyl glycidyl ether with the aid of functional initiators and terminators was employed for the preparation of a clickable linear triblock terpolymer precursor possessing three azido and three ethynyl groups at the selected positions. Next, the intramolecular CuAAC of the linear precursor successfully produced the well-defined tricyclic triblock terpolymer with narrow dispersity in a reasonable yield. The present strategy is useful for synthesizing model polymers for studying the topological effects on the triblock terpolymer self-assembly. Introduction Block copolymers (BCP) consisting of more than two different polymer segments (or blocks) have attracted considerable attention for their unique self-assembly properties such as microphase-separation and micellization [1][2][3]. It is well known that the molecular weight, volume fraction of each constituting block, and incompatibility between the blocks determine the dimension and morphology of the resulting self-assembled nanostructure. In addition to these classical structural parameters, macromolecular architectures such as star, comb, and cyclic polymer structures, have recently been recognized as an important factor affecting the BCP self-assembly behavior [4]. Several pioneering studies have indicated that cyclic diblock copolymers display unique self-assembly behaviors in both solution and solid states. For example, Tezuka et al. discovered that amphiphilic cyclic poly(ethylene oxide)-b-poly(butyl acrylate) formed micellar aggregates with greater thermal stability than the corresponding linear counterpart [5]. Hawker et al. reported that cyclic polystyrene-b-poly(ethylene oxide) self-assembled into a hexagonally close-packed cylindrical structure with smaller domain-spacing in the thin film state compared to its corresponding linear counterpart [6]. Thus, further studies relating to cyclic polymer synthesis and self-assembly are highly desired. Triblock terpolymers consisting of three different polymer segments also self-assemble in both solution and solid states, producing much more complex and diversified nanostructures than those created by diblock copolymers. For example, Noda et al. investigated the morphology of polyisoprene-b-polystyrene-b-poly (2-vinylpridine) in bulk and discussed the variation in morphologies depending on the composition [7,8]. Moreover, Müller et al. 
investigated the solution state self-assembly of linear triblock terpolymers to produce various micellar morphologies, such as three-layer core-shell-corona spheres, footballs, and hamburgers [9][10][11] Thus, the combination of a triblock terpolymer system with branched and cyclic architectures is exciting because nanostructures with a variety of novel morphologies and functions are created. Indeed, Dotera et al. simulated the morphology of µ-ABC miktoarm star polymers and reported new self-assembled structures that could not have been attained from diblock copolymers or linear triblock terpolymer counterparts [12]. In addition, Matsushita et al. found that µ-ABC miktoarm star polymers constructed an Archimedean tiling pattern in the bulk [13]. Ree et al. also found a complex three-phased hexagonal morphology in the asymmetric nine-arm star polymer, (polystyrene) 3 -b-(poly(4-methoxystyrene)) 3 -b-(polyisoprene) 3 [14]. These self-assembled structures in thin films can be used for lithographic templates for fabricating complex nanopatterns [15,16]. In contrast, triblock terpolymers with cyclic architectures have received scant attention, which is mainly because of their synthetic inaccessibility. In order to study the correlation between the cyclic architecture and self-assembled nanostructures in a triblock terpolymer system, establishing a facile synthetic route toward the architecturally complex triblock terpolymers with well-defined molecular weight and composition is crucial. Some examples of triblock terpolymers with cyclic architectures are the macrocyclic ABC triblock terpolymer and µ-ABC tricyclic miktoarm star polymer. Hadjichristidis et al. successfully synthesized a macrocyclic ABC triblock terpolymer via intramolecular Glaser coupling of a poly(isoprene-b-styrene-b-2-vinylpyridine) linear precursor [17]. The authors observed a significant influence of the cyclic architecture on the terpolymer microphase separation. Monteiro et al. reported the first synthesis of the µ-ABC tricyclic miktoarm star polymer, having macrocyclic polystyrene, poly(t-butyl acrylate), and poly(methyl acrylate) units [18]. In this synthesis, three different macrocyclic units possessing a reactive functional group were synthesized by the copper-catalyzed azido-alkyne cycloaddition (CuAAC) [19,20], and then combined via CuAAC and nitroxide radical coupling (NRC) reaction [21] to form the tricyclic structure (Scheme 1a). Although this strategy is highly sophisticated, a challenge still exists in establishing a new strategy for the µ-ABC tricyclic miktoarm star polymer without using the intermolecular coupling reaction. In this study, a novel synthetic approach toward µ-ABC tricyclic miktoarm star polymers consisting of a polyether backbone based on intramolecular click cyclization has been proposed (Scheme 1b). The most important feature of the proposed approach is that the three multicyclic units can be constructed in a single click reaction step. In previous studies, figure-eight-, trefoil-, and quatrefoil-shaped block copolymers were synthesized via the intramolecular click reaction [22,23]. Thus, the present strategy should be highly feasible as long as a well-defined clickable precursor can be synthesized. For the synthesis of the clickable precursor, t-Bu-P 4 -catalyzed ring-opening polymerization (ROP) of glycidyl ethers was employed as it enabled a precise control over the end group structure and molecular weight. 
Scheme 2 describes a detailed synthetic pathway for constructing a µ-ABC tricyclic miktoarm star polymer (P9) consisting of cyclic units of poly(decyl glycidyl ether) (M1), poly(dec-9-enyl glycidyl ether) (M2), and poly[2-(2-(2-methoxyethoxy) ethoxy) ethyl glycidyl ether] (M3). The linear triblock terpolymer possessing three azido groups and three ethynyl groups (P8) was synthesized by combining t-Bu-P4-catalyzed ROP and ω-end functionalization. It was then subjected to intramolecular click cyclization to produce P9. To the best of our knowledge, this is the first example of the construction of a µ-ABC tricyclic miktoarm star polymer via intramolecular coupling.

Results and Discussion

Synthesis of diazido-hydroxyl poly(M1) (P3). As the first step of the synthetic route, diazido-hydroxyl poly(M1) (P3) was synthesized in three steps, namely, the polymerization of decyl glycidyl ether (M1) with 6-azido-1-hexanol (I1), end group modification with 1-(((1-azido-3-(1-ethoxyethoxy)propan-2-yl)oxy)methyl)-4-(bromomethyl)benzene (T1), and deprotection of the ethoxyethyl group (Scheme 2).
Following a previous report [22], the t-Bu-P4-catalyzed ROP of M1 using I1 as an initiator was carried out at the [M1]0/[I1]0/[t-Bu-P4] ratio of 33/1/1 to produce azido poly(M1) (P1; Mn,NMR = 7000 g·mol−1, degree of polymerization of the block: DP1 = 33, dispersity: Đ = 1.03) in 56.5% isolated yield. The 1H NMR spectrum of P1 showed the characteristic signals corresponding to the poly(M1) backbone along with minor signals of the initiator residue, such as the methylene groups adjacent to the azido groups (A: 3.26 ppm in Figure 1d), verifying that the ROP of M1 was initiated from I1. The number-average molecular weight determined from NMR analysis (Mn,NMR) of P1 was in good agreement with the Mn value (Mn,theo) calculated from the monomer conversion and the initial monomer-to-initiator ratio (Mn,theo = 7220) (Table 1). Next, P1 was treated with an excess amount of T1 in the presence of sodium hydride to obtain diazido poly(M1) (P2; Mn,NMR = 7950 g·mol−1, DP1 = 33, Đ = 1.03). After ω-end functionalization, the ethoxyethyl group of P2 was deprotected under acidic conditions to give diazido-hydroxyl poly(M1) (P3; Mn,NMR = 7810 g·mol−1, DP1 = 33, Đ = 1.03). After thorough screening of the deprotection conditions, it was found that a cation exchange resin (DOWEX® hydrogen form) was best suited for a clean reaction without undesired side reactions. There was no significant difference between the SEC traces of P1, P2, and P3, suggesting the absence of any side reactions (Figure 2a).

Synthesis of triazido-hydroxyl poly(M1)-b-poly(M2) (P6). After rigorous dehydration, P3 was utilized as a macroinitiator for the synthesis of poly(M1)-b-poly(M2) (P4; Mn,NMR = 14,600 g·mol−1, DP1/DP2 = 33/33, Đ = 1.10). The t-Bu-P4-catalyzed ROP of dec-9-enyl glycidyl ether (M2) with the P3 macroinitiator was carried out at the [M2]0/[P3]0/[t-Bu-P4] ratio of 33/1/1 to obtain P4 in 84.7% yield. The SEC trace of P3 shifted to the higher molecular weight region after polymerization (Figure 2a), which confirmed that the polymerization reaction was initiated from the hydroxyl group of P3. The 1H NMR spectrum of P4 showed characteristic signals of both the poly(M2) and poly(M1) backbones, verifying successful post-polymerization (Figure 2d). In a similar fashion to the synthesis of P3, P4 was treated with T1 in the presence of sodium hydride, and the ethoxyethyl group was deprotected under acidic conditions to give triazido-hydroxyl poly(M1)-b-poly(M2) (P6). It should be noted that a non-negligible amount of a lower molecular weight byproduct was observed in the SEC trace of crude P6 (Figure 2c). The residue of macroinitiator P3 that had not been completely deprotected was expected to correspond to this shoulder peak. According to the SEC profile, 5.0% of the macroinitiator did not participate in the polymerization reaction (Figure 2a). To remove the unreacted macroinitiator, the crude product was subjected to preparative SEC, and pure P6 was isolated in 34.5% yield (Mn,NMR = 14,400 g·mol−1, DP1/DP2 = 33/33, Đ = 1.04) (Table 2).
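As a back-of-the-envelope check on the molecular weights quoted earlier in this section, the short sketch below reproduces the textbook estimate Mn,theo ≈ conversion × ([M]0/[I]0) × M(monomer) + M(initiator) for P1. The molar masses are calculated from the molecular formulas of decyl glycidyl ether (C13H26O2) and 6-azido-1-hexanol (C6H13N3O), and near-quantitative conversion is assumed purely for illustration; this is not the authors' calculation, only a consistency check against the reported value.

```python
# Rough estimate of Mn,theo for P1, assuming near-quantitative monomer conversion.
M_monomer_M1 = 214.34      # decyl glycidyl ether, C13H26O2, g/mol
M_initiator_I1 = 143.19    # 6-azido-1-hexanol, C6H13N3O, g/mol
feed_ratio = 33            # [M1]0/[I1]0
conversion = 1.0           # assumed for illustration

Mn_theo = conversion * feed_ratio * M_monomer_M1 + M_initiator_I1
print(f"Mn,theo ~ {Mn_theo:.0f} g/mol")   # ~7216 g/mol, close to the reported 7220
```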
Synthesis of triazido-triethynyl poly(M1)-b-poly(M2)-b-poly(M3) (P8). In a similar fashion to the synthesis of P4, the t-Bu-P4-catalyzed ROP of 2-(2-(2-methoxyethoxy) ethoxy) ethyl glycidyl ether (M3) was carried out at the [M3]0/[P6]0/[t-Bu-P4] ratio of 33/1/1 using P6 as a macroinitiator for the synthesis of poly(M1)-b-poly(M2)-b-poly(M3) (P7). The success of the extension of a poly(M3) block from the macroinitiator P6 was confirmed by the fact that the elution peak maximum of P6 shifted to the higher molecular weight region (Figure 3a). However, the SEC trace of the obtained crude product P7 showed a non-negligible amount of higher and lower molecular weight byproducts, along with the main product. The populations of the higher and lower molecular weight byproducts were calculated to be 4.4% and 15.2%, respectively, based on the SEC elution peak area. The low molecular weight byproduct corresponded to the poly(M3) homopolymer that was produced by the ROP of M3 initiated from water contaminant in the monomer or macroinitiator. On the other hand, the high molecular weight byproduct was possibly one of the intermolecularly cross-linked products formed through the reaction between the growing oxyanion and the side chain olefin [30]. Thus, the crude product was purified by preparative SEC, and pure P7 was isolated in 80.4% yield (Mn,NMR = 19,600 g·mol−1, DP1/DP2/DP3 = 33/33/25, Đ = 1.03) (Table 3).
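Because the downstream self-assembly behavior depends on the relative mass of each block, it may also be useful to convert the DP1/DP2/DP3 = 33/33/25 composition of P7 into approximate block masses and weight fractions. In the sketch below the monomer molar masses are computed from the molecular formulas (M1: C13H26O2, M2: C13H24O2, M3: C10H20O5), chain ends are neglected, and the numbers serve only as an illustrative consistency check against the reported Mn,NMR, not as values from the paper.

```python
# Approximate block masses and weight fractions for P7 (DP = 33/33/25),
# neglecting chain ends; an order-of-magnitude consistency check only.
blocks = {
    "poly(M1)": (33, 214.34),   # decyl glycidyl ether, C13H26O2
    "poly(M2)": (33, 212.33),   # dec-9-enyl glycidyl ether, C13H24O2
    "poly(M3)": (25, 220.27),   # 2-(2-(2-methoxyethoxy)ethoxy)ethyl glycidyl ether, C10H20O5
}

block_mass = {name: dp * m for name, (dp, m) in blocks.items()}
total = sum(block_mass.values())
for name, mass in block_mass.items():
    print(f"{name}: {mass:.0f} g/mol  (w = {mass / total:.2f})")
print(f"total ~ {total:.0f} g/mol")   # ~19,600 g/mol, consistent with the reported Mn,NMR
```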
After purification, P7 was treated with an excess amount of 5-(bromomethyl)-1,2,3-tris(prop-2-yn-1-yloxy)benzene (T2) in the presence of sodium hydride to obtain triazido-triethynyl poly(M1)-b-poly(M2)-b-poly(M3) (P8; Mn,NMR = 19,900 g·mol−1, DP1/DP2/DP3 = 33/33/25, Đ = 1.04). The proton signal corresponding to the terminal alkynes (protons d and e) was observed at 2.44-2.64 ppm in the 1H NMR spectrum (Figure 3d), and the quantitative introduction of the propargyl groups was verified by comparing the peak areas of the ethynyl protons d and e (2.44-2.64 ppm) with the benzyl proton b (4.45 ppm).

Synthesis of µ-ABC tricyclic miktoarm star polymer (P9). Finally, the intramolecular multiple click cyclization of P8 was performed to obtain the target µ-ABC tricyclic miktoarm star polymer (P9) using the CuBr/N,N,N′,N″,N″-pentamethyldiethylenetriamine (PMDETA) catalyst system in DMF at 100 °C. To avoid the intermolecular click reaction, the slow addition technique was employed. Thus, the P8 solution in DMF (19.1 mg·mL−1) was added slowly to the catalyst solution using a syringe pump at a rate of 0.3 mL·h−1. After complete addition, the reaction was continued for another 24 h at 100 °C. Finally, an alkyne-functionalized Wang resin was added to the reaction mixture, by which the unreacted P8 and any other possible byproducts possessing azido groups were removed by the click reaction. FT-IR analysis of the crude product obtained after the alumina column revealed the complete disappearance of the azido groups (Figure 4b). Notably, the absorption band corresponding to the side chain vinyl groups remained after the click reaction, which indicated that there was no significant side reaction. The crude product was then subjected to SEC analysis to confirm the progress of the cyclization reaction (Figure 4a). The elution peak maximum of the product was observed in the lower molecular weight region as compared to the linear precursor P8, which strongly supported the expected decrease in the hydrodynamic volume by the intramolecular cyclization reaction. On the other hand, small broad peaks were visible in the higher molecular weight region, which could be attributed to the oligomeric byproducts formed by the intermolecular click reaction. The population of the intramolecularly cyclized product was calculated to be 85.7% based on the SEC elution peak area. Further purification was then performed by preparative SEC to remove the high molecular weight byproducts, giving the pure product in 53.3% yield. The isolated product displayed a unimodal SEC trace with a Đ value of 1.02 (Figure 4a). The ratio between the Mn,SEC values at the SEC peak tops of P9 and P8, that is, Mn,p(P9)/Mn,p(P8) = <G>, was calculated to be 0.79 (Table 4).
In the 1H NMR spectrum, new signals (a′, 4.22-4.34 ppm; c′, 5.05-5.26 ppm; d′, 7.64-7.94 ppm in Figure 5a,b) assignable to the triazole rings formed by the click reaction appeared, while the signals corresponding to the ethynyl and methylene groups adjacent to the azido groups completely disappeared. As the results of SEC, FT-IR, and 1H NMR analyses all indicated a successful intramolecular click reaction, the product was identified as P9. On the basis of end group analysis by 1H NMR, the Mn,NMR and DP1/DP2/DP3 were calculated to be 20,100 g·mol−1 and 33/33/25, respectively.
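The cyclized fraction (85.7%) and the compaction parameter <G> quoted above are both simple ratios that can be extracted from the SEC data. The sketch below shows one way such numbers might be computed from a digitized SEC trace; the Gaussian peaks and the peak-top molecular weights are synthetic stand-in values, not the measured chromatogram.

```python
# Illustrative only: synthetic SEC trace with a main (cyclized) peak and a small
# high-molecular-weight shoulder; a real analysis would use the measured trace.
import numpy as np

elution = np.linspace(10.0, 20.0, 2000)                              # elution volume, mL
dx = elution[1] - elution[0]
main = 1.00 * np.exp(-((elution - 15.5) ** 2) / (2 * 0.30 ** 2))     # cyclized product
shoulder = 0.12 * np.exp(-((elution - 13.8) ** 2) / (2 * 0.35 ** 2)) # intermolecular byproducts
trace = main + shoulder

# Fraction of intramolecularly cyclized product from peak areas,
# splitting at the valley between the two peaks.
split = 14.7
cyclized_area = np.sum(trace[elution >= split]) * dx
total_area = np.sum(trace) * dx
print(f"cyclized fraction ~ {cyclized_area / total_area:.3f}")

# Compaction parameter <G> from the apparent peak-top molecular weights
# (hypothetical values chosen to reproduce the reported ratio of 0.79).
Mn_peak_P9, Mn_peak_P8 = 15_800, 20_000
print(f"<G> = {Mn_peak_P9 / Mn_peak_P8:.2f}")
```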
Conclusions A new synthetic strategy for the µ-ABC tricyclic miktoarm star polymer comprising three different cyclic units of polyethers, namely, poly(decyl glycidyl ether), poly(dec-9-enyl glycidyl ether), and poly[2-(2-(2-methoxyethoxy) ethoxy) ethyl glycidyl ether], has been developed. The t-Bu-P 4 -catalyzed ROP of glycidyl ethers was employed for the preparation of a clickable linear triblock terpolymer precursor possessing three azido and three ethynyl groups at the selected positions. The intramolecular multiple click cyclization of the linear precursors successfully produced the well-defined tricyclic triblock terpolymer with narrow dispersity in a reasonable yield. Given the functional group loading capacity of the poly(glycidyl ether), the present strategy can provide model polymers suitable for studying the topological effects on the triblock terpolymer self-assembly. Indeed, the poly(dec-9-enyl glycidyl ether) segment has a reactive olefinic side chain [31] that can be transformed into a variety of functionalities via thiol-ene reaction, epoxidation, hydroboration, and hydrosilylation. Such side chain modification would permit the present terpolymer to self-assemble into three-phase microphase-separated structures, which can be used as templates for constructing complex nanopatterns. Efforts toward the synthesis of a series of triblock terpolymers with various architectures, including linear, star, and cyclic structures, are currently underway in order to comprehensively understand the correlation between the macromolecular architecture and microphase separation in triblock terpolymer systems.
2019-01-09T07:11:50.035Z
2018-08-01T00:00:00.000
{ "year": 2018, "sha1": "21dc7f7341c039016d10b0719bf7176f28f723df", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4360/10/8/877/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c98de05332010b29f338d4e94caaeb04ba02b115", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
81754554
pes2o/s2orc
v3-fos-license
Outcome of Self- and Planned Extubation in Organophosphate-Poisoned Patients

Background: Respiratory failure is the most common cause of morbidity and mortality in organophosphate (OP)-intoxicated patients. We aimed to assess and compare the need for re-intubation and outcome between patients with self-extubation (SE) and planned extubation (PE). Methods: All OP-poisoned endotracheally intubated patients admitted to the poisoning ICU were included. The frequency and time of SE, the need for re-intubation, and its impact on hospital stay and outcome were assessed. Results: SE was reported in fifteen patients (48.4%). The need for re-intubation in these patients was higher than in those who underwent PE (60.0% vs. 37.5%; P = 0.2). Early unplanned SE significantly correlated with the occurrence of pulmonary complications (P = 0.04). The rate of aspiration pneumonia was high (80%) in SE cases. Hospital stay was also significantly prolonged in these patients (14.6 vs. 5.4 days, P = 0.04). Conclusion: Planning for on-time weaning/extubation in OP-poisoned patients can prevent unplanned SE and decrease the occurrence of lung complications.

Objectives

The World Health Organization defines pesticides as chemical compounds used to kill pests, including insects, rodents, fungi, and unwanted plants. Pesticides are grouped based on their composition into carbamates, organochlorines, and organophosphates [1]. The use of pesticides benefits agricultural productivity and public health. Pesticides play a significant role in controlling vector-borne diseases, which are a main public health concern [2]. Organophosphates and carbamates are widely used as insecticides which inhibit cholinesterase activity [3]. Respiratory failure frequently occurs after severe organophosphate (OP) insecticide poisoning [4]. Most OP-poisoned patients need tracheal intubation and mechanical ventilation (MV) for respiratory support. Pulmonary complications, including bronchospasm, bronchorrhea, respiratory muscle weakness, pulmonary edema, pneumonia, and hypoxia, are the most common causes of morbidity and mortality in these patients [5,6]. Weakness of the respiratory muscles may last for a long time if acetylcholinesterase (AChE) is irreversibly blocked by OPs. On-time weaning and extubation are very important in these patients, while unplanned self-extubation (SE) is a serious health care concern and an indicator of poor quality and safety of care [7]. Early SE at an inappropriate time may cause respiratory failure and the need for re-intubation, with possible complications such as aspiration pneumonia [8].
The re-intubation risk in general intensive care units (ICUs) varies between 2 and 25 percent [5]. Almost 20% of re-intubations happen within the first 72 hours after extubation [6]. Inadequate sedation as well as agitation is a major risk factor for SE. Need for re-intubation is a major determinant of the patient's outcome. Both SE and re-intubation can be followed by serious complications, mainly aspiration, laryngeal edema, and an increased risk of pneumonia [9]. Most of the related studies have been performed in surgical and general ICUs. We aimed to evaluate the frequency of SE and its failure rate, the causes of need for re-intubation, and the outcome in OP-poisoned patients. We also aimed to compare the outcome between the patients who were planned to be extubated and those who self-extubated.

Patients and diagnostic inclusion criteria

In a prospective cross-sectional survey, all severely OP-poisoned patients older than 14 years who were brought to the poisoning emergency department (ED), had undergone tracheal intubation, and had been admitted to the adult poisoning ICU of our center between March 2013 and March 2014 were included. In our center, poisoned children and adolescents younger than 14 years are admitted to the PICU. Patients younger than 14 years and those with mixed toxicity, toxicity with insecticides other than OPs, accidental unplanned extubation due to tracheal tube displacement, re-intubation due to tube obstruction, and patients with underlying cardiovascular or lung diseases were excluded. Diagnosis was made by a positive history of exposure to OPs and development of cholinergic syndrome, and was confirmed by decreased butyrylcholinesterase (below the lower normal limit or decreased by more than 25% compared to the first available level). The serum level of acetylcholinesterase (AChE) was checked on presentation and daily afterwards during hospitalization. We differentiated OPs from carbamates by direct observation of the poison package or container brought by the patients' family at the physician's request. Unknown cases whose poison sample was unavailable were excluded. Atropine and pralidoxime (2-PAM) were initiated for all patients at the ED. All patients underwent gastric aspiration and washing with normal saline, received a single dose of charcoal (1 g/kg) via nasogastric tube, and were admitted to the toxicology ICU. In the ICU setting, all patients had physical restraint and received intravenous midazolam at a dose of 2 to 5 mg and fentanyl at a dose of 25 to 50 µg as needed every 4 to 6 hours to control agitation.

Planned extubation (PE) was defined as removal of the endotracheal tube and discontinuation of the ventilator by the physician, and self-extubation (SE) was defined as the endotracheal tube being removed in an unplanned manner by the patient. We used the "DAS extubation guidelines" for the criteria of extubation [10]. Re-intubation was defined as intubation two or more times during the same hospitalization period. Pulmonary complications were classified as acute respiratory distress syndrome (ARDS) and pneumonia. Aspiration pneumonia was diagnosed by observation of purulent secretions, fever (oral T > 38.3 °C), leukocytosis, and focal air space filling in chest radiography. ARDS was defined as hypoxia without response to oxygen therapy, diffuse crackles, bilateral diffuse patchy infiltrations in the chest radiography, and a PaO2/FiO2 ratio < 200. We could not assess the pulmonary capillary wedge pressure in our setting. Intermediate syndrome (IS) was defined as a state of muscle paralysis associated with relapse of cholinergic symptoms after recovery from the cholinergic crisis [11].

Data collection

A self-made questionnaire containing information on the amount of the ingested OP, presenting signs and symptoms, on-arrival vital signs, on-arrival and daily lab tests, treatments given, and the patients' final outcome was filled in for every single patient by trained fellows. On-arrival and pre- and post-weaning/extubation respiratory indexes (respiratory rate, venous blood gas (VBG) analyses, O2 saturation), signs and symptoms of respiratory distress, as well as the level of consciousness based on the Glasgow coma scale (GCS), were also recorded. Type of extubation (self- versus planned extubation), need for re-intubation, causes and number of re-intubation episodes, hospital stay, duration of MV, complications, and, finally, the outcome were investigated and compared between those with planned and self-extubation. If the patients needed re-intubation, the causes and clinical condition of the patients at re-intubation time were documented as well.

Statistical analysis

Data were analyzed using the Statistical Package for the Social Sciences (SPSS) software, version 18, by application of chi-square, Fisher's exact, and Mann-Whitney U tests for comparison of nonparametric variables and Student's t test for parametric variables. P values of 0.05 or less were considered statistically significant. This study was approved by the Ethics Committee of Shahid Beheshti University of Medical Sciences. Since the patients were intubated, informed consent was taken from their next of kin.
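To make the statistical workflow concrete, the sketch below shows how the two headline group comparisons could be run with open-source tools instead of SPSS: Fisher's exact test on the re-intubation counts and a Mann-Whitney U test on hospital stay. The 2x2 counts are reconstructed from the reported percentages (9/15 re-intubated after SE vs. 6/16 after PE), and the length-of-stay vectors are invented placeholders, so the printed p-values are illustrative rather than a re-analysis of the study data.

```python
# Illustrative sketch; counts reconstructed from reported percentages,
# hospital-stay values are placeholders, not study data.
from scipy.stats import fisher_exact, mannwhitneyu

# Re-intubation: 9/15 after self-extubation (60.0%) vs. 6/16 after planned extubation (37.5%)
table = [[9, 6],    # SE: re-intubated, not re-intubated
         [6, 10]]   # PE: re-intubated, not re-intubated
odds_ratio, p_reintubation = fisher_exact(table)
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_reintubation:.2f}")

# Hospital stay (days), SE vs. PE -- placeholder vectors for illustration
stay_se = [10, 14, 21, 9, 30, 12, 18, 8]
stay_pe = [4, 6, 5, 7, 3, 9, 5, 6]
u_stat, p_stay = mannwhitneyu(stay_se, stay_pe, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_stay:.3f}")
```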
Results

Of 36 OP-poisoned patients who underwent tracheal intubation and were admitted to the ICU, three and two were excluded because of endotracheal tube obstruction and tube displacement, respectively. Finally, 31 cases met our inclusion criteria and were enrolled. Mean VBG values of the patients who underwent re-intubation were as follows: pH 7.39 ± 0.8 (range; 7.23) [12], and respiratory acidosis, severe tachypnea and hyperventilation associated with respiratory distress, and severe resistant hypoxia were the main causes of re-intubation in 5, 2, and 12 patients. Nineteen of 31 patients received sedation (5 were on fentanyl, 8 were on midazolam, and 6 were on both). Mean doses of fentanyl and midazolam were 70 ± 29 µg/q4h and 4.1 ± 1.2 mg/q4h, respectively, but almost 50% of our cases extubated themselves. Mean hospital stay was 11.2 ± 7.1 (range; 2.6 to 30) days and mean duration of MV was 6.7 days. The total death rate was 16.1% (5 patients), with no significant difference between those with self-extubation and those with planned extubation (P = 0.57); however, such a significant difference was observed between those who were re-intubated and those who were not (P = 0.011; Table 2). The mean dose of atropine used at the ED was 5.18 ± 8.96 mg. The administered 2-PAM was not significantly correlated with either the intubation period (P = 0.23, r = 0.22) or the hospitalization period (P = 0.16, r = 0.25). Figure 1 and Table 3 show serum AChE changes in all cases within the period of hospitalization. The mean serum level of AChE was 1915 ± 277 U/L (range; 195 to 5358) on arrival. Mean age was 33.8 ± 19.1 (range; 13-77) years with a male to female ratio of 2 to 1.
Suicide attempt by oral ingestion was the cause of poisoning in all cases.Mean amount of the ingested poison was 124 mL (range; 10-500).Mean time elapsed between ingestion of the poison and ED presentation was 2.7 hours (range; 1 to 8).Ten cases (33%) underwent airway intubation in the first 4 hours of presentation and the remainder was intubated 4 to 8 hours post ED presentation.Pneumonia, IS, and ARDS occurred in 22 (70.9%),five (16.1%), and one (3.2%)patients, respectively. Self-extubation occurred in 15 cases (48.4%), twelve of whom underwent re-intubation within the first 24 hours post extubation.Mean GCS at the time of SE was 11.8.Nine of these patients needed re-intubation because of respiratory failure.Causes of re-intubation were similar between those who self-extubated and those who were planned to be extubated.Patients who were re-intubated were not significantly different from those who were not in terms of age, gender, ingested dose, and serum AchE.Table 1 shows the patients' outcome based on type of extubation. Fifteen cases (48.4%) underwent re-intubation.After extubation, 17 (54.8%)and 25 (80.6%)still needed to receive atropine and 2-PAM.Re-intubation was statistically associated with pulmonary complications (P = 0.05) and only two patients with re-intubation remained without complications.On the other hand, in 8 patients who experienced complications, re-intubation happened once, twice and three times, respectively (P Risk factors of extubation failure and re-intubation Re-intubation is accompanied by a 5-fold mortality and 2-fold longer hospitalization period [6].In our study, nearly 60% of those with SE underwent re-intubation, an almost 2-fold rate compared to those with planned extubation (37%).Multiple re-intubations may lead to difficult re-intubation with a higher mortality rate, as well [6].Factors including visiting the patient by different physicians, young age, age over 70, long-term MV, long-term use of sedatives, and hemoglobin less than 10 g/dL or hematocrit less than 30% at extubation time may increase the risk of re-intubation [5].In this study, of 15 cases who were re-intubated, 11 (73%) needed multiple re-intubations (seven, two, and two patients were re-intubated for once, twice, and tree times, respectively).All of these cases experienced hospital-acquired pneumonia, received atropine and 2-PAM for a time period up to 12 days and had a long hospital stay up to 30 days.All five patients who died belonged to this group, as well. In our study, 80% of SEs occurred in the first 24 hours after tracheal intubation and 60% of them needed re-intubation in 24 hours.Epstein, et al. [14,15] declared that all patients would need re-intubation within the first 72 hours.This may be due to the higher rate of SE in our patients which itself may be due to poor management of the doses of the sedatives or using atropine which results in agitation.Another reason may be the fact that Discussion Although exposure to OPs has significantly decreased after 1995 in the US, it is still one of the most important causes of insecticide toxicity in most countries [13].Patients with severe OP poisoning may develop respiratory failure and most of them will need tracheal intubation and MV [11].Unplanned SE and re-intubation can be followed by serious complications including aspiration, laryngeal edema, and increased risk for pneumonia [9].Many SEs result in failure within the first 72 hours [14][15][16] while re-intubation is a major determinant of the patient outcome. 
Unfortunately, no standard test is available to predict the appropriate time for extubation [14,15]. SE occurred in nearly 50% of our patients, 60% of whom underwent re-intubation. This means a high failure rate of SE in our patients, which may be due to ongoing respiratory failure because of respiratory muscle weakness secondary to OP effects. The self-extubation rate was very high (nearly 50%) in comparison to other studies; SE rates of 4 to 15% have been reported in internal medicine ICUs [16]. This difference may be due to severe agitation in OP-poisoned patients, probably caused by the effects of atropine. Some researchers have shown that unplanned SE may be accompanied by complications and a poor prognosis [17-19]. In our study, pneumonia was the most common complication and its occurrence was not significantly different between the SE and PE groups.

Gradual reduction of sedatives could prevent self-extubation [16]. In our study, of the 15 patients who self-extubated, the weaning process had been started and the dose of sedatives had been reduced in six (40%). These patients had regained consciousness and removed their endotracheal tubes. This means that nearly 40% of SEs happened when the physician started to reduce sedatives.

Duration of MV, hospital stay, and outcome

Although no relation exists between re-intubation and mortality, a statistically significant relation exists between re-intubation and later complications [20,23]. Extubation failure will result in long-term hospitalization and increases mortality and later complications [14,15]. SE may lead to increased duration of MV and hospital stay [7]. Although in this study there was no strong correlation between duration of MV and SE, there was a significant correlation between SE and prolonged hospital stay (14.5 vs. 8.1 days, P = 0.048). In our study, the mean duration of MV and hospital stay was almost 2-fold in those who underwent re-intubation (9.3 vs. 4.3 days and 14.6 vs. 8.2 days, respectively). This means that early SE at an inappropriate time can increase the duration of ICU stay in OP-poisoned patients and be a risk factor for poor outcome.

Of the 31 patients who underwent tracheal intubation, 23 (75%) needed MV for more than 24 hours and 60% underwent re-intubation. The Mann-Whitney test showed a significant relation between the number of intubations and the frequency of complications (P = 0.04). The hospitalization period was significantly longer in those who were re-intubated (P = 0.04).

Intermediate syndrome was the cause of respiratory failure and the leading cause of re-intubation in 5 (16.1%) cases; all of these patients had been re-intubated one to three times, all of them needed prolonged MV and long-term administration of atropine and 2-PAM, and three of them died (3 of the 5 dead cases). There was no statistically significant difference in the rate of IS between those who were re-intubated and those who were not (P = 0.468). The mortality rate was not different between those with SE and PE, but all five dead patients belonged to the group of patients who underwent re-intubation (P = 0.011).

Conclusion

Outcome is poorer in OP-poisoned patients who self-extubate, and the rate of re-intubation is also higher in these cases. Re-intubation is related to a longer hospitalization period, development of airway and pulmonary complications, and increased mortality. Careful respiratory monitoring and administration of adequate sedation are recommended to prevent early unplanned SE at an inappropriate time.
2-PAM doses needed to be increased in those who underwent re-intubation, probably because of respiratory failure secondary to intermediate syndrome. Respiratory muscle weakness due to IS was the cause of respiratory failure in four cases, who received atropine and 2-PAM for 18 to 30 days. As shown, an inappropriate dose of 2-PAM can lead to extubation failure and increase the need for re-intubation and prolonged MV in OP-poisoned patients. Mixed model analysis showed that there was no statistically significant difference in atropine and 2-PAM doses between the two groups (P = 0.441 and 0.381, respectively).

There is a statistically significant correlation between decreased serum AchE level and increased need for re-intubation. Mixed model analysis showed that there was a statistically significant difference in AchE activity between those who were re-intubated and those who were not. Table 3 shows that although the serum level of AchE was not related to the need for tracheal intubation at the ED, there was a statistically significant correlation between decreases in AchE level and an increased risk of re-intubation (P = 0.047).

Sedatives, physical restraint, and agitation

Although Bambi et al. [7] believed that SE could be prevented with non-benzodiazepine drugs, the use of BZDs is strongly recommended for OP poisoning [4,20]. Use of physical restraints without prescription of sedative drugs has not been recommended, since it can be a risk factor for SE [7]. All of our patients had physical restraint, but almost 50% self-extubated. This confirms that physical restraint in the presence of inappropriately low doses of sedatives cannot prevent SE. APACHE II score > 17, agitation, physical restraint, and higher levels of consciousness are major risk factors for SE [7], and agitated patients are at greater risk of SE [15]. Therefore, the correct use of sedatives and education of the nursing staff can decrease these risks. The majority of OP-poisoned patients become alert a few hours after intubation and are at risk of SE in spite of ongoing respiratory compromise.

Early deep sedation and over-sedation are associated with worse outcomes and increased hospital mortality [21]. It seems that daily interruption of sedative infusions, in comparison to continuous deep sedation, can decrease the duration of MV and the length of ICU stay [21,22]. On the other hand, inadequate sedation and uncontrolled agitation are major risk factors for SE [9]. We used a daily sedation interruption protocol for our patients, under which nearly 50% of our patients self-extubated. Therefore, we believe that in cases with an anticipated long need for MV (such as OP poisoning), sufficient sedative drugs should be prescribed, especially in the first 24 hours of admission.

Table 2: Correlation between re-intubation and mean atropine, 2-PAM, and serum AchE during hospitalization.

Table 3: Relation between serum level of AchE and re-intubation.
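The between-group comparisons and correlations reported above (Mann-Whitney tests, correlations of AchE and antidote doses with re-intubation) can be reproduced in outline with standard statistical libraries. The snippet below is only an illustrative sketch with made-up numbers; it is not the study's analysis code, and the repeated-measures mixed model used for the AchE data is not reproduced here.

```python
# Illustrative sketch with hypothetical per-patient values (not the study data).
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

# Hospitalization period (days) in re-intubated vs non-re-intubated patients
hosp_reintubated = np.array([14.6, 12.0, 30.0, 9.5, 18.2, 11.4])
hosp_not_reintubated = np.array([8.2, 6.1, 4.9, 7.3, 5.5, 9.0])

u_stat, p_value = mannwhitneyu(hosp_reintubated, hosp_not_reintubated,
                               alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, P = {p_value:.3f}")

# Admission AchE (U/L) against a 0/1 re-intubation indicator
ache_admission = np.array([195, 820, 1500, 2400, 3100, 4200, 5358])
reintubated = np.array([1, 1, 1, 0, 0, 0, 0])

rho, p_corr = spearmanr(ache_admission, reintubated)
print(f"Spearman rho = {rho:.2f}, P = {p_corr:.3f}")
```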
Changes in prolonged sedentary behaviour across the transition to retirement

Background Prolonged sedentary behaviour is associated with a higher risk of cardiometabolic diseases. This longitudinal study examined changes in daily total, prolonged (≥30 min) and highly prolonged (≥60 min) sedentary time across the transition to retirement by gender and occupational status. Methods We included 689 aging workers (mean (SD) age before retirement 63.2 (1.6) years, 85% women) from the Finnish Retirement and Aging Study (FIREA). Sedentary time was measured annually using a wrist-worn triaxial ActiGraph accelerometer before and after retirement with on average 3.4 (range 2–4) measurement points. Results Women increased daily total sedentary time by 22 min (95% CI 13 to 31), prolonged sedentary time by 34 min (95% CI 27 to 42) and highly prolonged sedentary time by 15 min (95% CI 11 to 20) in the transition to retirement, and remained at the higher level of sedentary time years after retirement. The highest increase in total and prolonged sedentary time was observed among women retiring from manual occupations. Men had more total and prolonged sedentary time compared with women before and after retirement. Although no changes in men’s sedentary time were observed during the retirement transition, there was a gradual increase of 33 min (95% CI 6 to 60) in prolonged sedentary time from pre-retirement years to post-retirement years. Conclusion The transition to retirement was accompanied by an abrupt increase in prolonged sedentary time in women but a more gradual increase in men. The retirement transition may be a suitable time period for interventions aiming to decrease sedentary behaviour.

INTRODUCTION High levels of sedentary behaviour are associated with chronic diseases and mortality. 1 Moreover, accumulation of sedentary time in uninterrupted, prolonged bouts is dose-dependently associated with higher cardiovascular disease risk, 2 and especially sedentary bouts lasting ≥30 min in comparison to shorter bouts have been linked to greater all-cause mortality. 3 We have shown previously that accelerometer-measured daily total sedentary time increases in the transition to retirement, 4 but it is not known whether increased sedentary time includes changes in harmful prolonged sedentary time and how long the changes persist. It has been shown that prolonged sedentary behaviour is more prominent on workdays compared with days off, especially among office workers. 5 6 Thus, it is possible that prolonged sedentary time decreases after retirement when there is more time for activities not related to work or passive commuting. On the other hand, increased time spent at home may include passive activities such as watching television, which may in turn increase prolonged sedentary time. 7 The aim of this study was to examine changes in daily total and prolonged sedentary time across the retirement transition by following aging workers with annual accelerometer measurements from final years at work to a few years after the statutory retirement.

Key messages

What is already known about this subject?
► Accelerometer-measured prolonged sedentary time is higher on workdays compared to days off, especially among office workers.
► Sedentary time seems to increase after the transition to retirement, especially among women retiring from manual occupations.
► No previous studies have examined how prolonged sedentary time changes in the transition to retirement and how long the observed changes persist.

What are the new findings?
► Retiring women increased total and prolonged sedentary time after the transition to retirement and the level was maintained about 2 years after the retirement.
► Men had notably more sedentary time compared to women before and after the transition to retirement.
► Women retiring from manual occupations increased their prolonged sedentary time more than women retiring from non-manual occupations.

How might this impact on policy or clinical practice in the foreseeable future?
► Since prolonged sedentary time is associated with harmful health consequences, retirees should be encouraged to break up prolonged sitting.
► The transition to retirement could be a suitable time period for interventions to decrease sedentary time.

Study population This study is based on the Finnish Retirement and Aging Study (FIREA), which is an ongoing longitudinal cohort study of retiring municipal workers in Finland established in 2013, 8 described previously in detail. 4 Between September 2014 and March 2020, 689 of the 908 eligible participants who had given written informed consent had successfully worn the accelerometer immediately before and after the transition to full-time statutory retirement. The rest of the participants were not yet retired (n=197), did not wear the accelerometer (n=13), or wore the accelerometer but had <4 valid measurement days either before or after the transition to retirement (n=9), and were therefore excluded from the analyses. The average number of measurement points was 3.4 (range 2-4; 1.7 before and 1.7 after retirement). The mean (SD) number of valid days was 6.8 (0.5) per participant at each wave.

Accelerometer measurements Sedentary time was measured with wrist-worn triaxial ActiGraph wActiSleep-BT and wGT3X-BT accelerometers (ActiGraph, Pensacola, Florida, USA). Detailed measurement and data reduction procedures are described in our previous work. 4 Briefly, participants wore accelerometers on their non-dominant wrist for 7 consecutive days and nights once a year, with a mean of 361-364 days between the consecutive waves. Sleep time was excluded by the algorithm available in the ActiLife software 9 and non-wear time by the Choi algorithm. 10 Only valid days including ≥10 hours of wake wear time were included in the analyses. 4 We defined sedentary time using a cut-point of <1853 vector magnitude counts per min, validated against a thigh-worn accelerometer among older adults in free-living conditions, 11 and defined a sedentary bout as consecutive minutes spent sedentary ending with a ≥1 min break spent in non-sedentary activity. 2 We calculated daily means of total sedentary time and time spent in prolonged (≥30 min) and in highly prolonged (≥60 min) sedentary bouts at each study wave before (waves −2 and −1) and after the transition to retirement (waves +1 and +2).

Assessment of covariates Gender, date of birth and occupational status were obtained from the Keva register. 8 Occupational status was categorised based on the International Standard Classification of Occupations (ISCO) 12 into non-manual (ISCO classes 1-4) and manual workers (ISCO classes 5-9) according to the last known occupation preceding retirement.
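As a rough illustration of the bout definition used in the accelerometer measurements above, the sketch below classifies minute-epoch vector-magnitude counts with the <1853 counts/min cut-point and sums time accumulated in bouts of at least 30 or 60 minutes. It is a simplified, assumption-laden example with made-up data (no sleep, non-wear, or wave handling), not the ActiLife/FIREA processing pipeline.

```python
# Minimal sketch of the sedentary-bout logic, assuming minute-epoch counts as input.
import numpy as np

CUTPOINT = 1853  # vector-magnitude counts/min below which a minute is "sedentary"

def sedentary_bout_minutes(counts_per_min, min_bout=30):
    """Total minutes accumulated in sedentary bouts lasting >= min_bout minutes.

    A bout is a run of consecutive sedentary minutes; it ends as soon as at
    least one non-sedentary minute occurs.
    """
    sedentary = np.asarray(counts_per_min) < CUTPOINT
    total, run = 0, 0
    for is_sed in sedentary:
        if is_sed:
            run += 1
        else:
            if run >= min_bout:
                total += run
            run = 0
    if run >= min_bout:          # close a bout that runs to the end of the recording
        total += run
    return total

# Toy example: 45 sedentary min, 10 active min, 70 sedentary min, 5 active min
day = np.concatenate([
    np.full(45, 500),     # sedentary minutes (counts < 1853)
    np.full(10, 4000),    # active break
    np.full(70, 800),     # sedentary minutes
    np.full(5, 3000),     # active minutes
])
print(sedentary_bout_minutes(day, min_bout=30))   # 115 -> prolonged (>=30 min)
print(sedentary_bout_minutes(day, min_bout=60))   # 70  -> highly prolonged (>=60 min)
```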
Smoking status (never/former and current), body mass index (under/normal weight, overweight and obese), number of chronic diseases (0, 1 and ≥2) and mobility limitations (limitations in walking 2 km: none, minor and major 13 14 ) were derived from the questionnaires immediately before retirement (wave −1). 4

Statistical analyses The characteristics of the study population before retirement are shown as percentages for categorical variables and as means and SD for continuous variables. To illustrate daily total sedentary time, prolonged and highly prolonged sedentary time by gender before and after the transition to retirement, we used linear mixed models adjusting for wake wear time. We also compared daily total, prolonged and highly prolonged sedentary time in the transition to retirement, that is, immediately before (wave −1) and after retirement (wave +1), by gender and occupational status using linear mixed models and adjusting for confounding factors. All statistical analyses were performed using SAS statistical software, version 9.4 (SAS Institute, Inc, Cary, NC, USA).

RESULTS The characteristics of the study population immediately before retirement are presented in online supplemental table 1. The mean (SD) age was 63.2 (1.6) years for the women and 63.3 (1.4) years for the men. The majority of the participants were women (85%) and non-manual workers (66%). In women, daily total sedentary time, as well as prolonged and highly prolonged sedentary time, did not change notably before retirement but increased markedly in the transition to retirement (p<0.0001) and levelled off after retirement (figure 1). In the transition to retirement, the observed increase was 22 minutes (95% CI 13 to 31) in daily total sedentary time, 34 minutes (95% CI 27 to 42) in prolonged sedentary time and 15 min (95% CI 11 to 20) in highly prolonged sedentary time (online supplemental table 2). In particular, women retiring from manual occupations increased their total and prolonged sedentary time (online supplemental table 2). Men increased daily total sedentary time, as well as prolonged and highly prolonged sedentary time, in the year preceding retirement (21 min, 95% CI 6 to 35; 23 min, 95% CI 10 to 36; 11 min, 95% CI 2 to 19; figure 1), but no statistically significant changes were observed during the transition to retirement (online supplemental table 2). An overall increase in prolonged sedentary time was observed from wave −2 to wave +2 (33 min, 95% CI 6 to 60, figure 1). Men had significantly more daily total and prolonged sedentary time compared with women at all time points.

DISCUSSION This longitudinal accelerometer-based study showed that the transition to retirement induced a notable increase in prolonged sedentary time in women. In men, prolonged sedentary time increased more gradually across the retirement transition. To the best of our knowledge, this is the first longitudinal study to report changes in accelerometer-measured sedentary time from pre-retirement years to post-retirement years. Previous knowledge on long-term changes in sedentary time across the transition to retirement is based on self-reports, 8 which cannot be used to examine sedentary bouts and are subject to recall and information bias.
Previous accelerometer-based findings comparing daily sedentary time before and after retirement 4 do not provide information on how long-lasting the observed increase in daily sedentary time is and whether the increase is induced by the transition to retirement itself or by other factors, such as aging. 15 With annual accelerometer measurements, we were able to show that the transition to retirement induced changes in sedentary behaviour in women, and especially changes in prolonged sedentary time were observed. Our results extend previous knowledge by showing that the previously observed higher daily total sedentary time after retirement 4 concerns particularly prolonged sedentary time, which is more harmful for health compared with short sedentary bouts. 3 Interestingly, prolonged sedentary time did not decrease, but actually increased in the transition to retirement among those retiring from non-manual occupations, even though previous findings have shown that workdays include more prolonged sedentary time compared with days off, especially among office workers. 5 6 As a possible explanation, retirement generally brings changes to daily routines and social interactions, and the amount of active social participation after retirement may partly explain the amount of sedentary time. 16 Social connections and meaningful activities may decrease after retirement, leading to increased time spent at home and engagement in sedentary activities such as watching television, which is likely done in a more prolonged manner than other sedentary activities such as using a computer. 7 Moreover, when people retire, physical activity during commuting and lunch breaks no longer interrupts the periods of sitting. Since an increase in prolonged sedentary behaviour increases the risk of cardiovascular disease and mortality dose-dependently, 2 3 retirees should be encouraged to break up sedentary activities. As men accumulated high sedentary time, they could especially benefit from interventions aiming to decrease sedentary time already during working life. Future research on the health consequences related to increased sedentary time after retirement is needed.

The strengths of our study include a longitudinal study design, accelerometer-measured sedentary time and consideration of several individual characteristics associated with sedentary behaviour. 16 The measurements were conducted at the same time of the year for each individual and therefore bias associated with seasonal variation was minimised. As a limitation, wrist-worn accelerometers may underestimate sedentary time, especially when compared with thigh-worn accelerometers. 11 We used categorisation into non-manual and manual occupations as an indicator of work-related activity and socioeconomic status, but there may be heterogeneity in terms of sedentary behaviour within the occupational groups. Our study population comprised 85% women, which corresponds to the female-dominated target population of Finnish public sector workers. 17 As there were no notable differences from the eligible study population, 4 our results can be generalised to public sector employees in Finland or to countries with a similar statutory retirement age and pension system.
Synthesis, Solvatochromic Performance, pH Sensing, Dyeing Ability, and Antimicrobial Activity of Novel Hydrazone Dyestuffs

Dyeing, Printing and Auxiliaries Department, Textile Industries Research Division, National Research Centre, 33 El-Buhouth Street, Dokki, Cairo 12622, Egypt
Department of Zoology, Faculty of Science, Beni-Suef University, Beni-Suef 65211, Egypt
Department of Biology, College of Science, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
Pretreatment and Finishing of Cellulosic-based Fibers Department, Textile Industries Research Division, National Research Centre, 33 El-Buhouth Street, Dokki, Cairo 12622, Egypt

Introduction

Solvatochromic materials attract much interest due to their potential applicability as sensors in determining solvent polarity, in optical light-emitting diodes, dye-sensitized solar cells, and electro- and photoluminescent materials for laser purposes, in molecular electronics for the production of molecular switches, and as colorimetric chemosensors in the detection of explosives and volatile organic materials [1-8]. Generally, a solvatochromic material can be defined as a chemical substance able to alter its color in solvents of different polarities due to a variation in its absorption or emission spectra in each solvent. There is a variety of solvatochromic dyes that have been discovered recently, such as pyridinium, merocyanine, and stilbazolium dyes [9-15].

Thus, many researchers have prepared hydrazone-based materials as target molecular structures and investigated their antimicrobial activities. The widespread use of antimicrobial agents results in the growth of strongly resistant pathogens. Therefore, there has been considerable interest in the development of a diverse and inventive collection of pharmacological agents [27-33]. These observations emphasize the necessity of preparing novel hydrazones that possess diverse biological activities. Arylhydrazones are characterized by their simple preparation and are worth considering in the design of novel molecular switches with solvatochromic and pH sensing performance. A hydrazone functional group has the ability to operate as a bridge in a donor-acceptor molecular structure, or it can itself act as an electron-donor moiety when it is in conjugation with an electron-withdrawing moiety [34-39].

Herein, we present the synthesis, characterization, photophysical properties, dyeing behavior, and antimicrobial assessment of novel tricyanofuran-hydrazone dyes 1-3, in which a number of electron-withdrawing and electron-donating substituents were introduced at the ortho-, meta-, and para-positions of the aromatic moiety of the diazonium chloride. The molecular structures of the novel hydrazone dyes are presented in Figure 1. The molecular switching character of the hydrazone dye was also explored.
Experimental

Materials and Methods. Melting points were measured uncorrected in degrees Celsius using a Stuart SMP30. FT-IR spectra were recorded on a Fourier-transform infrared spectrophotometer (Nexus 670, Nicolet, United States) in the range of 400-4000 cm−1 with a spectral resolution of 4.0 cm−1. Mass spectra were recorded on a Shimadzu GCMS-QP 1000 EX mass spectrometer at 70 eV. Elemental analyses (C, H, and N) were carried out using a PerkinElmer 2400 analyzer (Norwalk, United States). UV-visible absorption spectra were determined at ambient conditions using a UNICAM UV-visible 300. Nuclear magnetic resonance (NMR) spectra were measured on a BRUKER AVANCE 400 spectrometer at 400 MHz; chemical shifts are given in ppm relative to TMS as internal standard at 295 K. The pH values were recorded using a BECKMAN COULTER pHI 340 meter with a combined glass-calomel electrode.

Solvents were purchased from Aldrich and Fluka for both the dye preparation procedures and the spectroscopic studies (spectroscopic grade). All reactions were monitored using Merck aluminum thin layer chromatography plates precoated with silica gel PF254 (20 × 20 mm, 0.25 mm) and observed by the naked eye under an ultraviolet lamp (254 or 365 nm). The fabric was scoured according to the literature procedure [40,41].

Dyeing Procedure. The dyeing process was applied to the polyester fabric using the high-temperature, high-pressure coloration technique according to the literature procedure [4]. A dispersion of the dyestuff was prepared by dissolving the proper amount of dye (2 wt.% relative to the weight of fabric) in 1 mL DMF and then adding it gradually with stirring to the dye bath (liquor ratio 50:1) in the presence of sodium lignin sulfonate as a dispersing agent (2 wt.% relative to the weight of fabric). The pH of the dye bath was adjusted to 4.85 using an aqueous solution of acetic acid, and the wetted-out polyester fabric was then added. The dye-bath temperature was maintained at 130 °C for 180 minutes under pressure in an infrared dyeing apparatus. The fabric was then washed and subjected to the reduction clearing process at 80 °C for 45 minutes in an aqueous bath (1 L) containing sodium hydroxide (2 g) and sodium hydrosulphite (2 g), followed by soaping using a nonionic detergent (2%). The fabric was then rinsed in cold water and neutralized with an aqueous solution of acetic acid (1 g/L) for 5 minutes at 40 °C, followed by rinsing in tap water and air-drying. The dye uptake into the polyester fibers was determined by absorbance measurements based on the Beer-Lambert law. This was evaluated by sampling the dye bath at different periods of time (15, 40, 55, 75, 90, 105, 120, 135, 150, 160, 170, and 180 minutes) while running the dyeing process, according to the previously reported techniques [4,7].

Color Strength Measurement. The color strength (K/S) of the dyed polyester samples was assessed by applying the reflectance technique using the Kubelka-Munk equation, K/S = (1 − R)^2/(2R) − (1 − R_0)^2/(2R_0), where S is the scattering coefficient, K is the absorption coefficient, and (R, R_0) are decimal fractions of the reflectance of the dyed and undyed polyester samples, respectively [7].
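The K/S relation quoted above can be evaluated directly from reflectance measurements. The following is a small worked example assuming the usual single-constant Kubelka-Munk form; the reflectance values are hypothetical and only illustrate the arithmetic, not measured data from this work.

```python
def kubelka_munk(R):
    """K/S for a reflectance R expressed as a decimal fraction (0 < R <= 1)."""
    return (1.0 - R) ** 2 / (2.0 * R)

def colour_strength(R_dyed, R_undyed):
    """Colour strength of the dyed sample relative to the undyed substrate."""
    return kubelka_munk(R_dyed) - kubelka_munk(R_undyed)

# Hypothetical reflectance values at the wavelength of maximum absorption
print(round(colour_strength(R_dyed=0.12, R_undyed=0.85), 2))  # ~3.21
```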
Synthesis of Hydrazone Dyestuffs. The highly electron-deficient tricyanofuran moiety was prepared according to a previously described literature procedure by applying Knoevenagel condensation between 3-methylacetoin and malononitrile in the presence of absolute ethanol as a solvent and sodium ethoxide as a strong base [26]. The Knoevenagel condensation reaction is an interesting reaction for the synthesis of electron-deficient substances, such as the highly electron-deficient oxygen-containing tricyanofuran heterocycle. The strongly electron-withdrawing CN groups on the tricyanofuran moiety are useful for the stabilization of the tricyanofuran carbanion obtained by proton abstraction from the active methyl substituent in the presence of sodium acetate as a weak base [2]. The tricyanofuran carbanion was then subjected to azo coupling with the appropriate aryldiazonium salt to produce the corresponding unstable azo dyes, which were converted directly into the stable hydrazone dyes 1-3 as shown in Scheme 1.

The molecular structures of the prepared hydrazone dyes were verified according to their spectroscopic data. FT-IR spectra displayed absorption peaks at 3287, 3248, and 3386 cm−1, which are assigned to the hydrazone NH group, whilst the peaks at 1586, 1575, and 1577 cm−1 are due to the C=N stretch of the hydrazone functional group for dyes 1, 2, and 3, respectively. The 1H-NMR spectra showed singlet peaks at 12.31, 12.82, and 10.02 ppm, which are assigned to the proton of the hydrazone NH group of dyes 1, 2, and 3, respectively. The 1H-NMR spectra of dyes 1-3 also displayed a singlet peak around 8.59, 8.31, and 7.29 ppm, respectively, due to the aliphatic vinyl proton (=C–H). The downfield shift of this singlet signal is a result of the strong impact of the electron-withdrawing tricyanofuran moiety.

Solvatochromic Measurements. The UV-visible absorption bands were monitored in the wavelength ranges 455-540, 475-497, and 475-485 nm for dyes 1, 2, and 3, respectively (Table 1 and Figure 2). They exhibited colors between yellow and purple in various pure solvents of different polarities. The type of substituents on the arylhydrazone moiety was found to affect the UV-visible absorption maximum. A distinctive solvatochromic behavior of all dyestuffs in both protic and aprotic solvents was monitored. Protic environments were anticipated to partially protonate the hydrazone dyestuffs via hydrogen bonding of the OH proton of the protic solvent to the lone pair of electrons on the hydrazone NH functional group [2]. This leads to reduced charge on the hydrazone NH group, resulting in the hypsochromic shift. In nonpolar and aprotic environments, nonetheless, this phenomenon was negligible and other contributions to solvation, such as the dipolar nature of the solvents, had to be considered. The solvatochromism monitored in our hydrazone dyes arises from changes in the degree of contribution of the lone pair of electrons on the nitrogen atom of the NH functional group, which can partially function as a bridge between the partially electron-rich hydrazone donor and the highly electron-deficient tricyanofuran acceptor fragment in a semi-donor-acceptor molecular system. This can simply be described as an extended inductive effect, as demonstrated in Scheme 2. This results in an interesting positive solvatochromic behavior owing to the generated partial extended conjugation [4]. Therefore, the polarizability due to the donor or acceptor substituents on the arylhydrazone moiety certainly influences the solvatochromic performance of the prepared hydrazone dyes.
Assessment of pH Sensing Effect. The electron-withdrawing functional group on the arylhydrazone generates an acidic hydrazone NH proton able to form a conjugate base, the so-called arylhydrazone anion, with electron-donating ability. This arylhydrazone anion acts as an electron-donating moiety in conjugation with the strongly electron-deficient tricyanofuran moiety, leading to an interesting spectral switch of a donor-acceptor molecular system under pH stimulus [24]. Figure 3 displays the UV-visible absorption spectra and color changes of dye 1 dissolved in acetonitrile (ca. 2.3 × 10−5 mol·L−1) under the reversible deprotonation and protonation process. Upon addition of a 1.0 mol·L−1 methanolic solution of tetrabutylammonium hydroxide (TBAH) to compound 1 dissolved in acetonitrile to raise the pH value, the maximum absorption wavelength at 451 nm was bathochromically shifted to 538 nm. On the contrary, the addition of a 1.0 mol·L−1 methanolic solution of trifluoroacetic acid (TFAA) caused the maximum absorption band at 451 nm to reappear as the pH value was reduced.

The existence of an isosbestic point at 489 nm verifies that two different molecular species, the hydrazone and the hydrazone anion, coexist in equilibrium with each other and that the spectral changes are attributed to the acidity of the hydrazone NH. Considering that this hydrazone NH prevented resonance between the arylhydrazone and tricyanofuran moieties, this suggests that a negatively charged arylhydrazone anion was formed in alkaline environment, leading to a considerable increase of the internal charge transfer, which can be described as an extended resonance effect (Scheme 3).

To inspect the reversibility and stability of such pH sensing changes, methanolic solutions (1.0 mol·L−1) of TBAH and TFAA were employed to switch the pH between ∼6.62 and ∼6.93. The ratios of the UV-visible absorption values at 451 nm (∼0.8327) and at 538 nm (∼0.6650) of compound 1 were recorded, and the results are displayed in Figure 4. It was obvious that this procedure was highly reversible, demonstrating that compound 1 was stable at different pH values.

Evaluation of Dye Uptake. Polyester fabrics are characterized by their high affinity toward disperse colorants. The prepared hydrazone colorants are characterized by their nonionic small molecular structures with low solubility in the aqueous medium. Thus, they can be described as disperse dyestuffs with the capability to be applied on hydrophobic fabrics, such as polyester. The prepared dyes 1-3 were applied by the high-temperature and -pressure approach at 130 °C on polyester samples at 2% shade to afford dyed polyester substrates ranging from yellow to orange and orange-red. All dyes demonstrated excellent uptake into the polyester substrates, as displayed in Figure 5. This fact indicates their high penetration into the polyester chains. In addition, the high dye uptake can be attributed to the fabric affinity resulting from the small and planar molecular structures of the prepared hydrazone colorants. Thus, the dyed polyester fabrics showed excellent colorfastness to washing and rubbing. The shades, color strength, and colorfastness properties of dyestuffs 1-3 are displayed in Table 2.
The colorfastness against light of the prepared dyes 1-3 was found to depend on the substituent type on the hydrazone moiety. These substituents are able to change the electron density over the entire molecular structure of the dye. Thus, all dyes displayed good colorfastness to light except for dye 1, which contains strongly electron-withdrawing nitro substituents. On the contrary, dye 1 showed a better color strength than dyes 2 and 3, which is consistent with the ability of electron-withdrawing nitro substituents to increase the depth of color of dye 1. The shades of the dyed polyester substrates were in accordance with the monitored absorption maximum wavelengths of the prepared dyes in solution.

Antimicrobial Activity. The produced dyestuffs 1-3 were independently examined against Escherichia coli, Staphylococcus aureus, and Candida albicans, using agar plate counting as the standard procedure. The antimicrobial reduction percentages induced by the prepared dyes are summarized in Table 3. Dyestuff 1, bearing strongly electron-withdrawing substituents, demonstrated a low inhibition effect on the reduction percentages, indicating poor antimicrobial activity, while dyestuffs 2 and 3 with electron-donating substituents showed moderate antimicrobial activity.

Conclusions

Some novel tricyanofuran hydrazone derivatives were prepared and applied on polyester fibers as disperse dyes, with a variety of substituents at the ortho-, meta-, and para-positions of the arylhydrazone moiety. They were prepared using a simple approach via azo coupling of the tricyanofuran starting material with the appropriate diazonium chloride. The molecular structures of the dyes were verified by 1H-NMR and 13C-NMR spectra, elemental analysis, and FT-IR spectra. A positive solvatochromism was recorded by UV-visible absorption spectra in a variety of solvents with different polarities. The pH molecular switching effect, accompanied by reversible color changes, was monitored by UV-visible spectra as a result of the variation of the pH value, leading to charge delocalization on the hydrazone dye and resulting in extended conjugation through a quinoid form. This stimulated planarity of the hydrazone dye resulted from the electron-withdrawing substituents on the arylhydrazone moiety, producing a higher conjugation extent of the hydrazone anion dye than that of the hydrazone dye. Thus, the pH sensing is displayed via modulation of the intramolecular charge transfer changed by deprotonation/protonation. The prepared dyestuffs were applied on polyester substrates by the high-temperature and -pressure technique to give good depths of shade from yellow to orange and orange-red. The studied dyestuffs displayed mostly satisfactory colorfastness properties. We also explored the application of the hydrazone dyes as potential substances with antimicrobial efficiency against some pathogenic species. Dyes comprising electron-donating groups on the arylhydrazone moiety demonstrated moderate antimicrobial activities, while dyes with strongly electron-withdrawing groups on the arylhydrazone displayed weaker antimicrobial activities.
Probing core overshooting using asteroseismology

Properly modeling the interface between convective cores and radiative interiors is one of the most challenging and important open questions in modern stellar physics. The rapid development of asteroseismology, with the advent of space missions partly dedicated to this discipline, has provided new constraints to progress on this issue. We here give an overview of the information that can be obtained from pressure modes, gravity modes and mixed modes. We also review some of the most recent constraints obtained from space-based asteroseismology on the nature and the amount of mixing beyond convective cores.

Introduction

On the occasion of the workshop "How much do we trust stellar models?" organized in Liège to celebrate the 75th birthday of Arlette Noels, I was asked to review the recent results obtained with asteroseismology to better understand the interface between convective cores and radiative interiors. This topic is both one of the most pressing open questions for stellar physics and a subject that is dear to Arlette's heart. The impact on stellar physics is clear. The mixed region associated with the convective core plays the role of a reservoir for nuclear reactions and knowing its extent is crucial to accurately model stellar evolution, in particular to estimate stellar ages. Over the last decade, the advent of space asteroseismology has yielded precious constraints on the size of the mixed core for stars of various masses and stages of evolution. The interpretation of these seismic data has greatly benefitted from the work of Arlette Noels and her collaborators in Liège on the physical processes responsible for the extension of convective cores (overshooting, semiconvection) and their asteroseismic signature (see, e.g., Noels et al. 2010).

Among the processes that can extend convective cores, overshooting is the most often cited. Formally, the limit of the convective core is set by the Schwarzschild criterion and it corresponds to the layer above which upward-moving convective blobs start to be braked. However, this criterion does not take into account the inertia of the ascending blobs, which can in fact overshoot over a certain distance inside the stable region. This is expected to extend the size of the mixed core. Despite the large number of studies dedicated to this phenomenon, the details of how it operates remain very uncertain. Three physical quantities need to be determined in order to properly model core overshooting:

1. The distance d ov over which chemical elements are mixed beyond the formal limit of the convective core. Theoretical studies wildly disagree on the value of d ov , with predictions ranging from 0 to several units of local pressure scale height H P (e.g., Saslaw & Schwarzschild 1965, Shaviv & Salpeter 1971, Roxburgh 1978, Zahn 1991).

2. The nature of the extra-mixing beyond the convective core. Overshooting can be modeled either as an instantaneous mixing (all chemical elements being homogeneous in the overshooting region), or as a diffusive process where the turbulent velocities are generally assumed to decay exponentially in the overshoot region (Herwig 2000).

3. The temperature stratification in the extra-mixing region. According to the Schwarzschild criterion, the temperature gradient should correspond to the radiative gradient (∇ = ∇ rad ) in the overshoot region.
However, the convective blobs that penetrate inside the stable regions could heat these layers and bring the temperature gradient closer to the adiabatic gradient ∇ ad . The latter case is usually referred to as penetrative convection and, by opposition, the case of an inefficient penetration that does not alter the temperature gradient (∇ = ∇ rad in the extra-mixed region) is referred to as non-penetrative convection.

The situation is even more complicated because other poorly-understood processes can also extend the size of convective cores, such as rotation-induced mixing (e.g. Maeder 2009) or semiconvection (e.g. Langer et al. 1985). The combined effects of all these phenomena are generally modeled in stellar evolution codes by a simple extension of the mixed core over a distance considered as a free parameter. This distance is often referred to as the overshooting distance and denoted as d ov , even though one should keep in mind that the extension of the core may in fact be caused by several distinct processes, not only core overshooting. We also use this terminology in this review.

The details of how the core extension is implemented vary from one evolution code to another. The codes assuming an instantaneous mixing in the overshoot region usually take d ov as a fraction α ov of the pressure scale height H P at the core boundary. Core overshooting can also be implemented as a diffusive process, and in this case the diffusion coefficient is generally taken as

D(r) = D_conv exp[ −2 (r − r_s) / (f_ov H_P) ],   (1)

where r_s is the radius of the Schwarzschild boundary, D_conv is the MLT diffusion coefficient some distance below r_s, and f_ov is an adjustable parameter controlling the distance of overshooting. The temperature gradient is chosen to be either ∇ ad (penetrative convection) or ∇ rad (non-penetrative convection).

Another important aspect is the treatment of the extension of "small" convective cores, for stars with masses around 1.2 M⊙. The pressure scale height diverges in the center, so for small cores, the classical implementation described above generates unrealistically large core extensions that can reach the size of the core itself. Here again, evolution codes have different ways of remedying this problem. For instance, Cesam2k defines the overshooting distance as d ov = α ov min(H P , r s ). By default, MESA adopts the definition d ov = α ov min(H P , r s /α MLT ), where α MLT is the mixing length parameter. Considering these definitions, small convective cores can have extensions over distances that vary by a factor α MLT for the same value of α ov in the two codes. Other studies chose to impose a linear dependence of α ov on stellar mass in this mass range (e.g., Pietrinferni et al. 2004, Bressan et al. 2012). One should be aware of these differences, which prevent direct comparisons of overshooting efficiencies between codes that adopt different prescriptions.

The diversity of these implementations is due to the current lack of observations that could help constrain the physical properties of the extra-mixing beyond convective cores. So far, constraints on core overshooting were obtained mainly from the modeling of eclipsing binaries (e.g. Stancliffe et al. 2015, Claret & Torres 2018) and from the color-magnitude diagrams of clusters (e.g., Maeder & Mermilliod 1981, VandenBerg et al. 2006). These observational data are essentially sensitive to the distance of the extra-mixing d ov .
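To make the exponential prescription of Eq. (1) concrete, here is a minimal numerical sketch; the boundary values (r_s, H_P, D_conv, f_ov) are arbitrary illustrative numbers rather than output from any stellar model, and real codes evaluate D_conv from the MLT solution slightly below the boundary.

```python
# Sketch of the exponential (Herwig-type) overshoot diffusion coefficient, Eq. (1).
import numpy as np

def d_overshoot(r, r_s, D_conv, f_ov, H_p):
    """Diffusion coefficient above the Schwarzschild boundary r_s (same length
    units for r, r_s and H_p)."""
    return D_conv * np.exp(-2.0 * (r - r_s) / (f_ov * H_p))

r_s, H_p = 0.05, 0.04          # core radius and pressure scale height (in R_sun, say)
D_conv, f_ov = 1.0e14, 0.02    # cm^2/s near the boundary; dimensionless free parameter

r = np.linspace(r_s, r_s + 0.01, 5)
print(d_overshoot(r, r_s, D_conv, f_ov, H_p))
# D drops by a factor e^2 over a distance f_ov * H_p, so the partially mixed region
# extends roughly a few times f_ov * H_p beyond the formal core boundary.
```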
Asteroseismology can directly probe the size of the mixed core at the current age because oscillation modes are sensitive to the sharp gradient of chemical composition at this location. As will be shown in the following sections, seismic constraints can also be obtained on the chemical profile within the region of extra-mixing and on the temperature stratification, which opens the interesting prospect of testing more complex models of core overshooting. In this review, we present a selection of the most recent seismic constraints on the physical properties of the boundaries of convective cores. Our aim is not to be exhaustive, but to give an overview of the latest developments made possible thanks to space-based asteroseismology. For this purpose, we focus on four types of stars. We start on the main sequence with results obtained for solar-like pulsators using pressure modes (Sect. 2) and for slowly pulsating B (SPB) stars using gravity modes (Sect. 3). We then show that mixed modes can also place strong constraints on core overshooting in subgiants (Sect. 4) and core-helium burning giants (Sect. 5).

2 Constraints from main sequence solar-like pulsators

2.1 What constraints can we expect from pressure modes?

Figure 1: Variations in the ratio r 01 as a function of frequency for 1.15-M⊙ main sequence models with α ov = 0 (green), 0.1 (red), 0.15 (cyan), 0.2 (blue). The vertical dashed lines indicate the frequency interval where solar-like oscillations are expected to be excited. From Deheuvels et al. (2015).

Pressure modes are sensitive to the region of extra-mixing beyond the convective core through its effect on the sound speed c_s. Assuming an ideal gas law, c_s² = Γ₁RT/µ, where Γ₁ is the adiabatic exponent, T is the temperature, and µ is the mean molecular weight. At the boundary of the mixed core, a strong µ-gradient develops, which creates a near discontinuity in the sound speed. This generates an acoustic glitch for pressure modes (the spatial scale of the variations in c_s is smaller than the mode wavelength), which produces a clear signature in the frequencies of these modes. It is well known that acoustic glitches generate a periodic modulation of the mode frequencies (Gough 1990). The amplitude of the modulation depends on the intensity of the glitch (sharpness of the µ-gradient) and the period depends on the location of the glitch (the deeper the boundary of the mixed core, the longer the period). In principle, pressure modes thus convey information about the size of the mixed core and the nature of the mixing in the overshoot region. Note that acoustic glitches are also produced by the bottom of the convective envelope (e.g., Christensen-Dalsgaard et al. 2011) and the zone of ionization of helium (e.g., Mazumdar et al. 2014, Verma et al. 2014).

Although the periodic modulation due to the acoustic glitch is present in the mode frequencies themselves, it is more convenient to use combinations of mode frequencies instead. Most studies use small differences d 01 or second differences dd 01 built with radial and dipolar modes. It has indeed been shown that these quantities are particularly sensitive to the structure of the core (e.g., Provost et al. 2005).
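For illustration, the sketch below computes dd 01 and the ratio r 01 = dd 01 /∆ν 1 (discussed in the next paragraph) from lists of radial and dipole mode frequencies. The five-point form of dd 01 used here is the commonly adopted smoothed definition and is an assumption on my part; exact conventions differ slightly between papers, and the frequencies are made-up, roughly solar-like values in µHz.

```python
# Frequency combinations probing the core: dd01 and r01 = dd01 / dnu1.
import numpy as np

nu0 = np.array([1800.0, 1935.0, 2070.0, 2205.0, 2340.0])  # l = 0 modes, consecutive orders
nu1 = np.array([1862.0, 1997.0, 2132.0, 2267.0, 2402.0])  # l = 1 modes, consecutive orders

def dd01(nu0, nu1):
    """Five-point second difference, defined for the interior radial orders."""
    return 0.125 * (nu0[:-2] - 4.0 * nu1[:-2] + 6.0 * nu0[1:-1]
                    - 4.0 * nu1[1:-1] + nu0[2:])

def r01(nu0, nu1):
    """Ratio dd01 / dnu1, with dnu1 the large separation of dipole modes."""
    dnu1 = nu1[1:-1] - nu1[:-2]
    return dd01(nu0, nu1) / dnu1

print(r01(nu0, nu1))
# With these artificially regular frequencies the ratios are constant (~0.04);
# for a real star they vary slowly with frequency, and a core glitch imprints
# the periodic modulation discussed above.
```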
Besides, the ratios r 01 , defined as dd 01 /∆ν 1 , where ∆ν 1 corresponds to the large separation of dipolar modes (∆ν 1,n = ν 1,n − ν 1,n−1 ), have been shown to be largely insensitive to the structure of the outer layers, which makes them almost immune to the well-known near-surface effects (Roxburgh & Vorontsov 2003). As an illustration, Fig. 1 shows the variations in the ratio r 01 with frequency for 1.15-M⊙ main sequence models. Different extensions of the convective core were considered, ranging from α ov = 0 to α ov = 0.2, and models were evolved until the same age. For α ov > 0.1, the models have a convective core at the current age and the periodic modulation caused by the edge of the core is clearly visible. It is also evident that for larger core sizes (when α ov increases), the period of the oscillation decreases. Fig. 1 also shows the approximate range of frequencies where p-modes are expected to have detectable amplitudes. It appears that this interval is much shorter than the period of the modulation, which unfortunately prevents us from getting model-independent measurements of the size of the mixed core. However, the behavior of r 01 in the range of observed modes changes significantly as α ov is varied, showing that the extent of the convective core can be determined using model-dependent analyses. In particular, the coefficients of a linear regression of r 01 (ν) have been shown to efficiently constrain the amount of extra mixing (Deheuvels et al. 2010b, Silva Aguirre et al. 2011).

Some recent results

HD49933: HD49933 is an F5-type main sequence star and was the first solar-like pulsator to be observed with the CoRoT satellite. It benefitted from 180 days of nearly continuous observations and the properties of its oscillation modes were determined by Benomar et al. (2009). The identification of the degree of the detected modes initially caused problems, an ambiguity arising between the l = 1 rotationally split modes and the overlapping l = 0 and l = 2 modes. This problem is now known to occur for all F-type pulsators owing to their large mode width, and several methods have been proposed to remedy this issue (e.g., Bedding & Kjeldsen 2010). The mode identification for HD49933 is now robust, and Goupil et al. (2011) performed a modeling of the star. They found that HD49933 has a stellar mass in the range 1.05-1.18 M⊙ and an age in the range 2.9-3.9 Gyr. They showed that to reproduce the behavior of the observed small differences d 01 , an extension of the convective core over a distance d ov ≈ 0.2 H P needs to be invoked. They also calculated models of the star including microscopic diffusion and rotationally-induced mixing using the code CESTAM (Marques et al. 2013). They found that these models fail to reproduce the slope of d 01 (ν) and that some amount of core overshoot needs to be included to produce a good agreement with the seismic data.

KIC12009504 (Dushera): The Kepler satellite has provided us with nearly four years of continuous observations during the nominal mission. An early analysis of the Kepler main sequence target KIC12009504 (dubbed Dushera) already provided evidence that the star has a convective core and placed constraints on its extent (Silva Aguirre et al. 2013). The authors modeled the star using nine months of Kepler data analyzed by Appourchaux et al. (2012). They found that the star has a stellar mass of 1.15 ± 0.04 M⊙, a radius of 1.39 ± 0.01 R⊙ and an age of 3.80 ± 0.37 Gyr.
They also showed that the observed ratios r 01 could be reproduced only by models with a convective core that extends beyond the Schwarzschild boundary (see Fig. 2). Optimal fits were obtained when the limit of the mixed core is located at an acoustic radius equal to ∼ 2.4% of the total acoustic radius. (The acoustic radius is defined as τ ≡ ∫₀^r dr'/c_s; it corresponds to the wave travel time from the center to a radius r.)

Dependence of the amount of overshoot on stellar mass: As illustrated by the examples presented here, several asteroseismic studies were carried out on individual stars, which all reported the need for extended convective cores. It is important now to have access to consistent studies of larger samples of stars in order to better understand how the efficiency of the extra-mixing beyond convective cores depends on global stellar properties. Deheuvels et al. (2016) modeled 24 Kepler solar-like pulsators in a consistent way, using the coefficients of a 2nd-order polynomial fit to the ratios r 01 to probe the mixed core. Within this sample, 10 stars were found to be already on the post-main-sequence. Among the other targets, the authors detected a convective core in eight stars and they were able to estimate the size of their mixed core, finding a good agreement between the two evolution codes Cesam2k and MESA (using identical prescriptions for core overshooting). It was necessary to include significant extensions of the mixed core in all the considered targets. The optimal values of α ov obtained for these eight stars are shown as a function of stellar mass in Fig. 3. As can be seen in this figure, there seems to be a tendency of core overshooting to increase with stellar mass in the considered mass range, although more data points will be required to confirm this trend. Interestingly, an increase of the efficiency of core overshooting with mass was also found using constraints from double-lined eclipsing binaries by Claret & Torres (2018), although this result is currently debated (Constantino & Baraffe 2018). One should also beware that the stars studied by Deheuvels et al. (2016) are in the range of mass where the radius of the convective core is smaller than the pressure scale height at the core edge during most of the main sequence evolution. The efficiency of the extra-mixing beyond the convective core parameterized by α ov thus depends on the treatment that they adopted for "small" convective cores (d ov redefined as α ov r s when H P > r s in this study).

3 Constraints from main sequence g-mode pulsators

Gravity modes are expected to be excellent probes of the region of extra-mixing beyond the convective core, through their dependence on the Brunt-Väisälä frequency N (see Sect. 3.1). The CoRoT and Kepler missions have produced exquisite photometric data for g-mode classical pulsators, in particular slowly pulsating B (SPB) stars and γ Doradus stars, thus providing information about core properties for stars of intermediate masses.

3.1 What constraints can we expect from gravity modes?

High-order gravity modes (in the asymptotic regime) are expected to be equally spaced in period. The asymptotic period spacing of g modes of degree l is approximately given by

∆Π_l = 2π² [ L ∫_{r_i}^{r_o} (N/r) dr ]⁻¹,

where L² = l(l + 1) and the radii r_i and r_o are the inner and outer turning points of the g-mode cavity.
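The following is a short numerical sketch of this expression, evaluated by trapezoidal integration over a toy Brunt-Väisälä profile; the profile, the turning points and all numbers are invented for illustration and do not come from a stellar model.

```python
# Toy evaluation of the asymptotic g-mode period spacing Delta Pi_l.
import numpy as np

R_SUN = 6.957e10                                   # cm
r = np.linspace(0.05, 0.95, 2000) * R_SUN          # avoid the centre and the surface
N = 1.0e-3 * np.exp(-((r / R_SUN - 0.15) / 0.25) ** 2)   # rad/s, toy g-mode cavity

def delta_pi(ell, r, N, r_i, r_o):
    """Asymptotic period spacing (seconds) of g modes of degree ell."""
    mask = (r >= r_i) & (r <= r_o) & (N > 0)
    integral = np.trapz(N[mask] / r[mask], r[mask])          # integral of N/r dr
    return 2.0 * np.pi ** 2 / (np.sqrt(ell * (ell + 1)) * integral)

print(delta_pi(1, r, N, r_i=0.05 * R_SUN, r_o=0.95 * R_SUN))
# A deeper inner turning point or a larger N enlarges the integral and thus
# shortens the period spacing, which is the sensitivity exploited in this section.
```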
The Brunt-Väisälä frequency directly depends on the temperature stratification and the µ-gradient in the g-mode cavity through the relation

N² = (g δ / H_P) [ ∇_ad − ∇ + (ϕ/δ) ∇_µ ],

where ∇_µ ≡ (d ln µ/d ln P), δ = −(∂ ln ρ/∂ ln T)_{P,µ}, and ϕ = (∂ ln ρ/∂ ln µ)_{P,T}. As stars evolve, the hydrogen content in the convective core decreases and a region of increasingly large µ-gradient develops above the boundary of the core (see Fig. 4). This generates a buoyancy glitch at the outer edge of the µ-gradient region, where ∇ µ varies on a length scale that is shorter than the mode wavelength. This glitch produces a periodic modulation of ∆Π l , whose period depends on the location r µ of the glitch within the cavity (the deeper the glitch, the longer the period, as can be seen in the right panel of Fig. 4). The amplitude of the modulation depends on the intensity of the glitch, i.e., on the smoothness of the chemical profile outside the convective core. For stars massive enough for the CNO cycle to dominate during their main-sequence evolution, the convective core recedes, which increases the size of the µ-gradient region. As a result, the outer edge of the µ-gradient region moves outwards and the period of the modulation decreases, as can be seen in the right panel of Fig. 4.

The characteristics of this periodic modulation give direct constraints on the properties of the extra-mixing beyond the convective core. As shown by Miglio et al. (2008), adding core overshooting to stellar models changes the size of the µ-gradient region and thus modifies the period of the modulation (see Fig. 4). The nature of the mixing in the overshoot region can also be tested. When core overshooting is treated as a diffusive process, ∇ µ varies more smoothly than when an instantaneous mixing is assumed, and the glitch produced in the Brunt-Väisälä frequency is less steep (see left panel of Fig. 5). This makes a difference for g-modes with higher periods. These modes have shorter wavelengths, which eventually become smaller than the length scale of the sharp feature in ∇ µ as mode period increases. Thus, higher-period g modes do not "feel" this feature as a glitch and we expect the amplitude of the periodic modulation in ∆Π l to decrease as mode period increases. The situation is different for an actual discontinuity in the Brunt-Väisälä frequency, for which all the modes have wavelengths longer than the length scale of the glitch. To test this quantitatively, Pedersen et al. (2018) calculated a reference model of 3.25 M⊙ with diffusive overshooting and tried to see if its seismic content could be reproduced by models computed with an instantaneous overshooting. For this purpose, they generated a grid of models with instantaneous mixing in the overshoot region, with varying masses, initial hydrogen abundances, central hydrogen contents, and overshooting efficiencies. They showed that no model of the grid was able to reproduce the period spacings of the reference model computed with diffusive overshoot (see Fig. 6). This shows that for this type of star, one should be able to distinguish between an instantaneous and a diffusive overshoot. This is no longer true for more evolved models nearing the end of the main sequence (Pedersen et al. 2018).

In principle, information could also be obtained about the temperature stratification in the overshooting region. Indeed, with penetrative convection (∇ = ∇ ad ), the Brunt-Väisälä frequency vanishes in the overshooting region, whereas with non-penetrative overshooting (∇ = ∇ rad ), it remains strictly positive.
The inner turning point r i of the g-mode cavity is therefore located deeper in the latter case. This should have an impact on the buoyancy radius of the sharp µ-gradient, defined as Π µ ≡ [ ∫ N/r dr ]^(−1) with the integral taken from r i to the glitch location r µ , and thus on the period of the oscillatory behavior of ∆Π l , which corresponds to the ratio between the buoyancy radius of the glitch and the total buoyancy radius of the cavity. This remains to be theoretically addressed. To use the information conveyed by γ Doradus and SPB stars about the core properties, one difficulty arises: these stars are usually fast rotators and the effects of rotation need to be taken into account to properly identify and interpret the periodic modulation caused by the µ-gradient region. This issue has been extensively studied and goes beyond the scope of the present review. However, we can mention that the validity of the so-called traditional approximation of rotation (TAR, Eckart 1960) has been shown (Ballot et al. 2012). This has made it possible to successfully identify the modes and analyze the oscillation spectra of fast-rotating γ Doradus and SPB stars (Bouabid et al. 2013). Some recent results HD50230: This star is a hybrid pulsator, oscillating both as an SPB star (gravity modes) and a β Cephei star (pressure modes), observed with CoRoT. It is also a slow rotator, which simplifies the interpretation of its oscillation spectrum. In the g-mode region of the spectrum, a group of eight modes with nearly constant period spacing was found by Degroote et al. (2010). The period spacings of these modes show a periodic modulation that the authors attributed to the edge of the mixed core. The authors found that the period of this modulation can only be accounted for with extra-mixing beyond the convective core over a distance of at least 0.2 H P . Interestingly, the amplitude of the modulation seems to decrease with increasing period, which the authors interpreted as evidence for a smooth gradient of chemical composition at the boundary of the mixed core. KIC10526294: KIC10526294 is an SPB star observed with Kepler. A series of 19 dipolar gravity modes with consecutive radial orders were detected by Pápics et al. (2014) for this star, making it a particularly interesting target to search for periodic modulation induced by the convective core. Rotational splittings could be measured for the star, which indicated that it is a very slow rotator (average rotation period of ∼ 188 days). The period spacings ∆P of the detected modes exhibit a clear deviation from the asymptotic period spacing. Moravveji et al. (2015) performed a detailed modeling of this target. They showed that the variations of ∆P with mode period are better reproduced with core overshooting implemented as a diffusive process than with an instantaneous mixing in the overshoot region. They found optimal values of the overshoot parameter f ov between 0.017 and 0.018 (see Eq. 1). They also claimed that including an extra-mixing in the radiative interior outside the overshooting region can significantly improve the agreement between the models and the observations. It should however be remarked that the optimal models are still far from giving a good statistical agreement with the Kepler observations (see Fig. 7, left panel). This suggests that the models might be missing some important ingredient. KIC7760680: This star is a moderately rotating SPB star observed with the Kepler satellite. 
It exhibits a series of 36 consecutive gravity modes, in which a clear periodic modulation can be detected (see Fig. 7, right panel). It is also apparent that the period spacings of KIC7760680 show an almost linear decrease with mode period. This is the clear signature of moderate rotation for prograde modes (Bouabid et al. 2013). Moravveji et al. (2016) modeled the star, considering different assumptions for the mixing within the overshooting region. They considered a solid-body rotation for the star and for each model, they optimized the rotation rate to reproduce the slope of the period spacings as a function of the mode period. As was the case for HD50230 and KIC10526294, they found that a diffusive overshoot reproduces the periodic modulation in the period spacings better than an instantaneous overshoot. With both implementations, the optimal models include a sizable overshooting region (f ov = 0.024 ± 0.001 in the case of a diffusive overshoot and α ov ∼ 0.32 for an instantaneous overshoot). Here again, the optimal solutions are quite far from the observations, yielding reduced χ 2 of the order of 2000. The bottom right panel of Fig. 7 shows that there is clear structure in the residuals (periodic modulation for mode periods larger than ∼1.25 days). This shows that the period of the modulation in ∆P differs between the models and the observations, especially for large mode periods. This is likely indicating that improvements could be made in the modeling of the chemical composition profile in the overshooting region. γ Doradus stars: Recently, long series of consecutive g modes were also revealed in the spectra of γ Doradus stars (Van Reeth et al. 2016, Christophe et al. 2018). These stars are generally moderate to fast rotators. However, once the signature of rotation has been correctly identified, an oscillatory behavior of the period spacings has been reported for some γ Doradus stars (Christophe et al. 2018). These stars could therefore also provide precious information on the properties of the extended mixed cores in the near future. Subgiants When stars evolve past the end of the main sequence, their inner layers contract as hydrogen starts burning in a shell. This causes the frequencies of gravity modes to increase owing to the increasing Brunt-Väisälä frequency in the core. In the meantime, the envelope expends as stars become subgiants. The mean density of the star decreases and therefore the frequencies of pressure modes also decrease. As a result, the frequencies of the lowest radial order g modes become of the same order of magnitude as the frequencies of the p modes that are stochastically excited in the outer part of the convective envelope. At this point, non-radial modes develop a mixed nature, behaving as g modes in the core and as p modes in the envelope. This phenomenon arises because of the coupling exerted between the two cavities by the evanescent zone that separates them. Mixed modes have a large potential because they convey information about the core properties while having detectable amplitudes at the surface. What constraints can we expect from mixed modes? The helium core of subgiants is radiative because hardly produces any luminosity. So even if the star had a convective core during the main sequence, convective mixing has ceased when the star becomes a subgiant. Nevertheless, the main sequence convective core leaves an imprint in the chemical composition profile of young subgiants. 
Since mixed modes are sensitive to the Brunt-Väisälä profile, and thus to the profile of µ, they can bring indirect information about the extent of the core and the nature of the mixing at its edge. The oscillation spectra of young subgiants contain only a few g-dominated modes, i.e., modes that are trapped mainly in the g-mode cavity. However, in subgiants, the coupling between the p- and g-mode cavities is strong for dipolar modes, and the frequencies of p-dominated modes are significantly affected by this coupling (Deheuvels & Michel 2010). Mixed modes convey information about the core properties through two channels: • The frequencies of g-dominated modes. As is apparent from Eq. 4, they depend essentially on the integral ∫ N/r dr taken between r 1 and r 2 , where r 1 and r 2 are the inner and outer turning points of the g-mode cavity. Fig. 8 shows the Brunt-Väisälä profile of a 1.3 M⊙ model in the subgiant phase. In the outer part of the g-mode cavity (below r 2 ), the Brunt-Väisälä frequency is dominated by the contribution of the µ-gradient (N 2 µ = gϕ∇ µ /H P , red solid line), whose shape depends on the extent of the main sequence convective core. • The intensity of the coupling between the p- and g-mode cavities. The coupling essentially depends on the Brunt-Väisälä profile in the evanescent zone (r 2 ≲ r ≲ r 3 in Fig. 8). It thus conveys information about the µ-gradient above r 2 , as can be seen in Fig. 8. The intensity of the coupling can be estimated observationally by observing its effect on the p-dominated modes. For low coupling intensities, their frequencies will hardly deviate from the asymptotic frequencies of p modes, whereas if the coupling is strong, large deviations are expected. Recent results The star HD49385 was observed with the CoRoT satellite for 137 days and its oscillation spectrum was analyzed by Deheuvels et al. (2010a). Fig. 9 shows the variations in the large separation ∆ν 1 of dipolar modes as a function of mode frequency. At low frequency, ∆ν 1 strongly deviates from the roughly constant value that is expected from asymptotic developments. It was later established that this was caused by the presence of a g-dominated mixed mode in the lower-frequency part of the spectrum, which coupled to the detected p modes and altered their mode frequencies (Deheuvels & Michel 2010). Deheuvels & Michel (2011) proposed a new optimization technique adapted to the modeling of stars with mixed modes, which they applied to HD49385. They found that the star has a mass of 1.25 ± 0.05 M⊙ and an age of 5.0 ± 0.3 Gyr. For their modeling, the authors considered models with an instantaneous overshooting over an adjustable distance d ov . They found two different families of solutions: one with a small amount of overshooting (α ov < 0.05) and the other with a moderate amount of overshooting (α ov = 0.19 ± 0.01). The models from the latter family provide the closest agreement with the observations and the large separation of their l = 1 modes is shown in Fig. 9. Deheuvels & Michel (2011) showed that this bimodality of the solutions is due to the strong dependence of the mode coupling on the stellar mass (the higher the mass, the lower the coupling). Only models with masses around 1.25 M⊙ are able to produce the correct coupling and thus reproduce the observed frequencies of l = 1 modes. The optimal mass was found to vary non-linearly with the amount of overshooting. Only low (α ov < 0.05) or moderate (α ov = 0.19 ± 0.01) values of overshooting correspond to a stellar mass of about 1.25 M⊙ . 
Models with α ov ∼ 0.1 have higher masses and thus a mode coupling that is too weak (see blue dashed curve in Fig. 9). Models with α ov > 0.2 have lower masses and thus a coupling that is too strong. Mixed modes can thus give measurements of the size of main sequence convective cores using a diagnostic that is completely independent from the one used for main sequence solar-like pulsators (Sect. 2). Several tens of subgiants have been observed with Kepler and could also provide constraints on the size of main sequence convective cores. The study of these targets is under way. Core-He burning giants Giant stars with masses M ≳ 0.7 M⊙ eventually start burning helium in their core. This happens either quietly in a non-degenerate core (for stars with masses M ≳ 2 M⊙) or in a flash for stars with masses M ≲ 2 M⊙, whose core is degenerate when it reaches the temperature at which He starts burning. In both cases, the star then develops a convective core. Measuring the extent of the mixed core at this evolutionary stage can bring complementary information about the interface between convective and radiative regions. We start by briefly introducing the challenges posed by the modeling of convective cores in core-helium burning (CHeB) stars (Sect. 5.1) and we then present the constraints derived from asteroseismology (Sect. 5.2). Modeling the convective core of CHeB giants The modeling of mixing in the core of low- and intermediate-mass stars during the CHeB phase is notoriously challenging. Depending on the criterion that is adopted for convective stability, evolutionary codes predict very different values for the size of the He-burning convective core, and thus also for the duration of the CHeB phase (see Fig. 10). The situation is more complicated than during the main sequence because C and O, which accumulate as He is burnt in the core, are more opaque than He. As a result, the radiative gradient increases in the convective core, and a discontinuity of the radiative gradient tends to develop at the boundary of the convective core. We here briefly describe some of the choices made to treat this in evolutionary codes and refer the interested reader to the review by Salaris & Cassisi (2017) for more details. In what is usually referred to as the bare Schwarzschild (BS) model, the Schwarzschild criterion is applied on the radiative side of the convective boundary (panel (a) of Fig. 11). Since the radiative gradient is hardly modified over time on the radiative side, the core size remains roughly constant during the whole CHeB phase (see black curve in Fig. 10, labeled as the "no overshooting" case). Meanwhile, the radiative gradient increases in the core, and the quantity ∇ rad − ∇ ad thus increases on the convective side of the core boundary. As established by Schwarzschild (1958) and recalled by Castellani et al. (1971b) and Gabriel et al. (2014), this situation is in fact unphysical because the convective velocities are expected to vanish at the edge of the convective core. As a result, the total flux should be equal to the radiative flux at this layer, and one should have ∇ rad = ∇ ≈ ∇ ad there. The BS model is therefore an incorrect implementation of the Schwarzschild criterion. Another way of understanding the inadequacy of the BS model is to realize that it is unstable to any mixing beyond the core boundary.
Figure 10: Size of the mixed core during the CHeB phase with different modelings for the boundary of the convective core. Figure from Constantino et al. (2015). 
Indeed, let us assume a mild extra-mixing, such that the first layer above the convective core is mixed with the convective core. In this layer, the abundance of carbon and oxygen increases, the opacity increases and hence the radiative gradient increases above the adiabatic gradient. The layer then becomes definitively convective. At the next time step, the layer above the enlarged convective core will in turn become convective. This process stops only when ∇ rad = ∇ ad on the convective side of the core boundary. Panel (b) of Fig. 11 thus shows the correct implementation of the Schwarzschild criterion. In practice, this is implemented in evolution codes by including a small amount of core overshooting (the extension of the convective core that it produces is sometimes referred to as induced overshooting) or by checking at each time step whether the layers above the convective core would become convective if they were mixed with the core, and by adding these layers to the core if it is the case. However, a complication occurs when the mass fraction of helium in the core drops below ∼ 0.7. Then, a minimum appears in the profile of the radiative gradient in the core, as can be seen in panel (a) of Fig. 12. For low and intermediate amounts of overshooting, the outward mixing brings fresh helium into the core and thus induces a decrease of ∇ rad in the whole convective core (panel (b) of Fig. 12). The minimum of ∇ rad eventually drops below ∇ ad and the convective core is split into two convective regions separated by an intermediate radiative zone. The outer convective region rapidly vanishes because of the decrease in ∇ rad . The convective core is thus composed only of the inner convective region and it has shrunk. As helium is burnt in the core, the abundance of carbon and oxygen increases again, ∇ rad increases and eventually has again a minimum within the core. We are then brought back to panel (a) of Fig. 12 and the situation repeats. As a result, the boundary of the convective core goes back and forth (see the case labeled as standard overshooting in Fig. 10), leaving behind step-like features in the helium abundance profile. In this case, the behavior of the convective core is in fact independent of the amount of core overshooting that is included (this is no longer true for large amounts of overshooting as explained below). The treatment of the intermediate radiative region that appears in the vicinity of the minimum of ∇ rad has been the subject of several studies. It is generally thought that it undergoes a partial mixing that enforces convective neutrality (∇ rad = ∇ ad ) in this zone (Castellani et al. 1971a, Castellani et al. 1985), as can be seen in panel (c) of Fig. 12. The partially mixed region shares similar features with a semi-convective layer, and this mechanism has been referred to as induced semi-convection.
Figure 11: Schematic behavior of the temperature gradient near the boundary of the convective core with ∇ rad = ∇ ad imposed on the radiative side (a) and on the convective side (b) of the boundary (from Castellani et al. 1971b).
Figure 12: Schematic behavior of ∇ rad after it has reached a minimum in the convective core. Panel a (resp. b) shows the evolution with standard overshooting and an increasing (resp. decreasing) radiative gradient. Panel c: evolution with semi-convection. Figure from Castellani et al. (1971a). 
Modeling this intermediate region as a semi-convective layer produces core sizes that are very similar to those obtained with overshooting (see how the cyan and orange curves nearly overlap in Fig. 10), but without the back-and-forth motion of the core boundary, and therefore with a smoother chemical composition profile. It was also found that when applying large amounts of core overshooting at the boundary of the convective core, the extra-mixed region becomes large enough to prevent the formation of a semi-convective region (Bressan et al. 1986, Bossini et al. 2015. In this case, the size of the mixed core depends on the amount of core overshooting that is imposed. Asymptotic period spacings of g modes in CHeB giants Thanks to the space missions CoRoT and Kepler, mixed modes have now been detected in tens of thousands of red giants. The frequencies of these modes can be identified using their asymptotic expression, which was first developed by Shibahashi (1979). By fitting this analytic expression to the observed mode frequencies, one can obtain estimates of various global seismic characteristics of the star, including the asymptotic period spacing ∆Π 1 of its dipolar gravity modes (see Eq. 4). The fitting procedure is challenging because of the large number of modes and it is made much more complicated by the splitting of mixed modes due to rotation. Mosser et al. (2015) have proposed a convenient method, based on the calculation of corrected mode periods (called stretched periods), which made it possible to perform an automatic fitting of red giants. Using this method, Vrard et al. (2016) were able to measure the asymptotic period spacing ∆Π 1 of 6100 Kepler red giants. This database constitutes an unprecedented opportunity to probe the core of red giants. Bedding et al. (2011) showed that the period spacing ∆Π 1 can be used to reliably distinguish CHeB giants from H-shell burning giants, which are ascending the red giant branch (RGB). The reason for this is evident from Eq. 4. In contrast with RGB stars, CHeB giants have a convective core. Their g-mode cavity is therefore smaller and they have larger values of ∆Π 1 . For CHeB giants, Montalbán et al. (2013) showed that there is a nearly linear relation between the size of the convective core and the asymptotic period spacing ∆Π 1 . Indeed, if the convective core expends, the g-mode cavity becomes smaller and ∆Π 1 increases. The Kepler data thus have a great potential to measure the size of the mixed core in CHeB stars. The asymptotic period spacings can also convey information about the temperature stratification. Indeed, in the case of penetrative convection, we have ∇ = ∇ ad and thus N 2 = 0 in the extra-mixed region. As a result, gravity waves do not propagate in the overshoot region. On the contrary, with non-penetrative overshooting, N 2 = N 2 T > 0 and the overshoot region is part of the g-mode cavity. We thus expect models with non-penetrative convection to have smaller values of ∆Π 1 than models computed with penetrative convection. For models with semi-convection above the convective core, N 2 = N 2 µ > 0 in the partially mixed region and ∆Π 1 is also expected to be smaller than with penetrative overshooting. As described in Sect. 3.1, sharp variations in the Brunt-Väisälä frequency (buoyancy glitches) induce periodic modulations in the period spacings of g modes. Such features could be measured from the frequencies of mixed modes and give strong constraints on the chemical composition profile near the core boundary. 
We come back to this in more detail in Sect. 5.2.3. Constantino et al. (2015) and Bossini et al. (2015) both led studies to compare the observed distribution of period spacings of CHeB giants to the distributions that would be predicted with different mixing schemes beyond the convective core. They found generally consistent results. Seismic constraints on the convective core of CHeB giants The "bare Schwarzschild" models have the smallest convective cores because the (incorrect) implementation of the Schwarzschild criterion on the radiative side prevents the core from growing. The highest period spacings ∆Π 1 predicted by these models are around 250 s (see black symbols in Fig. 13), well below the maximum observed period spacings, which are around 340 s. Bossini et al. (2015) reach the same conclusion. This confirms that the convective cores of the bare Schwarzschild models are much too small. Models that include low amounts of core overshooting or semi-convection also have period spacings that appear to be too small compared to the observations (cyan and orange symbols in Fig. 13). This means that their convective cores are too small. We already mentioned in Sect. 5.1 that models computed with semi-convection and models computed with low overshooting have very similar core sizes (Fig. 10). Yet Fig. 13 shows that the latter models have larger period spacings. According to Constantino et al. (2015), this is justified by the fact that large µ-gradients develop in models computed with overshooting, owing to the back-and-forth motion of the core boundary. This is enough to create efficient mode trapping inside the partially mixed region. As a result, the observed period spacing corresponds to the asymptotic expression of Eq. 4 calculated excluding the region of µ-gradient. It is thus larger than for models computed with low overshooting than for models computed with semi-convection, for which the chemical composition profile is smooth and such mode trapping does not occur. The Kepler data clearly point in favor of an extended mixed core, larger than the one produced with semi-convection or standard amounts of overshooting. To reproduce the seismic data, Bossini et al. (2015) calculated models with high amounts of overshooting. They found that models with non-penetrative convection over a distance of 1 H P or with penetrative convection over a distance of 0.5 H P could roughly reproduce the distribution of the observed period spacings. They gave their preference to the latter models because they also match the luminosity of the asymptotic-giant-branch (AGB) bump, which can be measured from Kepler data. Constantino et al. (2015) calculated models with a modified implementation of core overshooting. They prevented at all time the splitting of the convective core that occurs because of the minimum in ∇ rad . This model, which they refer to as maximal overshooting has no physical justification but aims at building convective core with maximal sizes. The authors found that these models produce period spacings that are consistent with the bulk of the low-mass observations (see magenta line in Fig. 13). Additional information was recently obtained from the measurement of period spacings in the CHeB giants of the two old open clusters NGC 6791 and NGC 6819 (Bossini et al. 2017). Fig. 14 shows the location in the ∆ν-∆Π 1 plane of the CHeB-members of these two clusters. 
The authors calculated models with the same physical properties as the CHeB giants of both clusters and using different mixing schemes at the core boundary. They found that models computed with a moderate amount of overshooting can reproduce the range of observed period spacings. Interestingly, the models computed with penetrative convection (adiabatic stratification in the extra-mixed region) predict period spacings that are too large for the stars at the beginning of the CHeB phase in NGC 6819, which led the authors to favor the non-penetrative convection scenario. Naturally, more evidence is required to be more conclusive. Constraints from buoyancy glitches Further constraints could also be obtained in the near future by detecting the signature of buoyancy glitches in the period spacings of g modes in CHeB giants. The mixing schemes presented in Sect. 5.1 predict very different abundance profiles in the region above the fully mixed core. For instance, models computed with standard overshooting show step-like features in the helium abundance above the core, while models computed with semi-convection have smooth helium profiles. Sharp variations of µ are expected to be felt as buoyancy glitches by g modes, which should produce an oscillatory component in the period spacing, as was described in Sect. 3.1. The occurrence of buoyancy glitches in the cores of red giants and their seismic signature in the period spacing of g modes has been extensively addressed by Cunha et al. (2015) using stellar models. Detecting these modulations in ∆Π 1 is more complicated for CHeB giants than for main sequence g-mode pulsators because of the mixed character of the modes. Nonetheless, the method of Mosser et al. (2015) can be used to recover the period spacings of pure gravity modes and thus reveal potential periodic modulations produced by glitches (see Fig. 15). Glitches produced by sharp µ-gradients above the mixed core are located deep within the g-mode cavity and are thus expected to produce long-period modulations. A systematic search for such features in the oscillation spectra of CHeB giants observed with Kepler should bring strong constraints on the way chemical elements are mixed above the convective core. Conclusion The advent of space asteroseismology has yielded numerous novel constraints on the properties of convective cores for stars with various masses and evolutionary stages. We started this review by mentioning that three physical quantities needed to be known to progress in our modeling of the boundary of convective cores. We conclude by summarizing the recent findings of asteroseismology for each of them: 1. Distance over which mixed cores are extended: We here presented only a small selection of all the seismic studies that provided constraints on the extent of the mixed core. The great majority of them concluded that an extension of the mixed core beyond the Schwarzschild limit needed to be invoked. These studies also showed that large star-to-star variations exist for the distance of the extra-mixing. Nevertheless, tendencies can be found in the available data. Main sequence intermediate-mass stars seem to require extensions of the order of 0.2-0.3 H P . For lower-mass stars (1.1 ≲ M/M⊙ ≲ 1.5), lower extensions are needed (from 0.05 to 0.2 × min(H P , r s ), where r s is the formal boundary of the convective core). In this mass range, a potential increase of the distance of extra-mixing with stellar mass has been reported but needs to be confirmed. 
In this review, we have focused on low- and intermediate-mass stars, which have so far benefited more from space-based asteroseismology, but seismic constraints have also been obtained on the core properties of massive stars. The seismic analyses of β Cephei pulsators (8 to 20 M⊙), essentially with ground-based observations, have shown quite large variations in the extent of the extra-mixed region from one star to another, typically ranging from 0 to 0.3 H P (e.g., Dupret et al. 2004, Ausseloos et al. 2004, Aerts et al. 2011, Briquet et al. 2012). Finally, it has been found that the convective core of core-helium-burning stars needs to be extended over even larger distances, likely in the range of 0.5-1 H P . 2. Nature of the mixing in the core extension: Seismology is currently the only tool to test how efficient the mixing of chemicals is beyond the edge of the convective core. Gravity modes, through their sensitivity to the gradient of µ, are particularly well suited for this purpose. The seismic study of three SPB stars has consistently shown that a diffusive overshooting modeled with an exponentially decaying diffusion coefficient yields better agreement with seismic observations than an instantaneous mixing in the overshoot region. Other constraints on the nature of the mixing could be brought in the near future by using mixed modes in subgiants. 3. Temperature stratification in the region of extra-mixing: Measuring this quantity is particularly difficult. However, having penetrative (∇ = ∇ ad ) or non-penetrative (∇ = ∇ rad ) convection changes the propagation of gravity modes in the overshoot region. This modifies the period spacing of g modes. Hints in favor of non-penetrative convection were obtained from the core-helium burning giants of an old open cluster. Further constraints could be obtained from SPB and γ Doradus stars. We here note that constraints have been obtained on the temperature stratification at the bottom of the envelope convection of the Sun. Christensen-Dalsgaard et al. (2011) found evidence for a smooth transition from ∇ = ∇ ad to ∇ = ∇ rad in the overshoot region. We note that in this review, we have focused exclusively on results obtained with the forward modeling approach. Seismic inversions also have a large potential to bring information on the properties of convective cores. Recent studies have shown promising results for solar-like pulsators (Bellinger et al. 2017, Buldgen et al. 2018) and new, model-independent constraints could come from such analyses in the near future. The number of targets for which the edge of the mixed core could be seismically probed is increasing rapidly. We are starting to build large enough samples so that trends can be searched for in the properties of the extra-mixed region as a function of global stellar parameters. In the short term, this can help us calibrate more refined models of convective core extensions in evolutionary codes. This could provide us with more reliable stellar ages, which is crucial for disciplines that require high-precision stellar modeling, such as the characterization of exoplanets, with the upcoming PLATO mission, or galactic archaeology. Even more challenging will be the task of disentangling the contributions from the different physical processes to the extensions of convective cores. So far, a pragmatic approach has generally been adopted, whereby the effects of all these processes are modeled together in a parametric way. 
To establish the contribution of rotational mixing, it would be very interesting to search for correlations between the amount of mixing beyond convective cores and the rotational properties of stars. Stars for which seismology can provide measurements of both the size of the mixed core and the internal rotation profile would be particularly useful. Magnetic fields are also expected to play a role by inhibiting rotational mixing through the damping of differential rotation in radiative interiors. For instance, this might be happening in the β Cephei pulsator V2052 Ophiuchi, which hosts a fossil magnetic field with B pol ∼ 400 G. Through a seismic modeling of the star, Briquet et al. (2012) found that it indeed has an unexpectedly low amount of extra-mixing beyond the convective core. More studies of this type are needed to progress in our understanding of the processes that can extend the size of mixed cores. In this context, the TESS and PLATO missions are particularly welcome. They will provide us with seismic data with nearly all-sky coverage, which will greatly increase the number of targets for which seismic constraints on the core properties can be derived. In particular, with PLATO data, we will be able to perform much more meaningful statistical studies of the extent of the mixed core in solar-like pulsators.
2020-01-13T12:10:17.000Z
2019-12-01T00:00:00.000
{ "year": 2020, "sha1": "f7ced0ec5c275b7f8a679727ab9e1585bef19211", "oa_license": null, "oa_url": "https://popups.uliege.be/0037-9565/index.php?file=1&id=9269", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "f7ced0ec5c275b7f8a679727ab9e1585bef19211", "s2fieldsofstudy": [ "Physics", "Geology" ], "extfieldsofstudy": [ "Physics" ] }
14722445
pes2o/s2orc
v3-fos-license
Mammalian SEPT9 isoforms direct microtubule-dependent arrangements of septin core heteromers Cell type–specific alternative splicing results in six confirmed mammalian SEPT9 isoforms. SEPT9 expression levels dictate the hexamer-to-octamer ratio of septin core heteromers, and isoform compositions and expression levels together determine higher-order arrangements of septin filaments. Interpretations: Recall that K562 cells express three SEPT9 isoforms -two large (a and b that only differ by 0.72 kDa) and one small (isoform f) -each of which can be predicted to cap the septin core hexamer at either end to form an octamer ( Figure 4). Analysis of the septin heteromer pool of cell lines in which the three endogenous SEPT9 isoforms are replaced by a single native isoform (Results, Figure 4C, Blue native PAGE) demonstrates that SEPT9 expression directs the octamer to hexamer heteromer ratio. Figure S1A shows the cognate experiment using AcGFP-tagged isoforms (described in Results, Figure 5), which demonstrates that AcGFP-SEPT9 derivative likewise generates the expected increase in the octamer to hexamer ratio. When native SEPT9 isoforms are expressed ( Figure 4C), the octamers generated upon expression of the SEPT9(a) or the SEPT9(f) isoforms have the same mobility as the endogenous octamer complex i and iii, respectively, which implies that complex i can be expected to correspond to octamers capped with either of the large SEPT9 isoforms (a or b), and complex iii to octamers capped with isoforms f at both ends. The intermediary sized complex ii was thus tentatively assigned as octamers with isoform f at one end and a or b at the other. The corresponding analysis of AcGFP-tagged octamers ( Figure S1A) shows that each of the expressed AcGFP-SEPT9 isoform generates uniformly sized octamers. In the case of AcGFP-SEPT9(a) and AcGFP-SEPT9(f), the presence of the AcGFP is identifiable by a shift in mobility as compared to complex i and iii, respectively. The molecular mass is considered as the main determinant for separation by Blue Native PAGE, but the shape of protein complexes (i.e. Stoke radius) is also of significance (Wittig et al., 2006). Septin heteromers are rodshaped and their Stroke radius -as determined by gel filtration (Sellin et al., 2011b) -are notably large relative to the mass. Based on a comparison with commonly used markers (ferritin, Stoke radius 6.10 nm/450 kDa; thyroglobulin, Stoke radius 8.5 nm/670 kDa), Blue Native PAGE analysis predicts a hexamer mass of ~500 kDa, which is a deviation that can be attributed to the elongated shape of the 282 kDa hexamers. The structure of the variable N-terminal extension of SEPT9 isoforms is unknown and the impact on octamer shape cannot be predicted. Figure S1C shows a plot of molecular masses (listed in Figure S1B) versus the mobility of the hexameric heteromer and octamers capped by different SEPT9 isoform derivatives upon separation by Blue native PAGE ( Figure S1C). It is evident from these data that, although the predicted masses and mobility of the native and AcGFP tagged octamers show a log-linear relationship (see dashed and dotted lines Figure S1C), the N-terminal extensions affect octamer mobility much more than could be excepted by their increase in mass. For example, the difference in mass between SEPT9(a) and SEPT9(f) is approximately the same as an AcGFP-fusion partner (27 kDa), i.e. 
the mass of octamers capped with either SEPT9(a) () or AcGFP-SEPT9(f) () is the same (~413 kDa), but the presence of AcGFP affects the mobility much less than the N-terminal extension of SEPT9(a). Relative to the G-domain, SEPT9 isoform a, e, and f have approximately a 295, 131 and 44 residues N-terminal region, respectively, and it is notable that even the N-terminal extension of the SEPT9(e) isoform has a pronounced effect on the mobility of the cognate octamers. Hence, this comparison of native and AcGFP-tagged octamers demonstrates that the N-terminal extension of SEPT9 has properties that facilitate separation of distinct subsets of octameric septin heteromers by the Blue Native PAGE technique. As outlined above, absolute molecular mass estimates based on the Blue Native PAGE technique can be misleading. Nevertheless, the log-linear mobility correlation of octamers capped with native SEPT9 isoform a, e, and f suggests a correlation with mass among the SEPT9 isoforms. The assignment of complex ii as a putative 385-386 kDa complexes composed of the hexamer capped with isoform f at one end and a or b at the other (, Figure S1B) is further supported by its mobility correlation shown in Figure S1C. FIGURE S2: Fluorescence intensities of individual cells in which the endogenous SEPT9 isoforms are replaced with AcGFP-tagged versions of the indicated SEPT9 isoform. Upper Panels: Cell lines described in Figure 5 were analyzed by flow cytometry. The distribution of background (Vector-Co) and AcGFP-fluorescence among live cells harboring shRNA SEPT9 and the indicated SEPT9 isoform reporter is shown. The mean fluorescence intensity of cells is indicated in each panel. More than 97% of all cells were included in the acquisition gate and 5000 cells were analyzed. Lower Panels: Cell lines described in Figure 5 were stained with propidium iodide followed by analysis of DNA content by flow cytometry. The mitotic index of cells is indicated in each panel. The data shown in upper and lower panels were reproduced in three independent transfection experiments. Analysis by flow cytometry is described in (Holmfeldt et al., 2003). Interpretations: The distribution of AcGFP-fluorescence intensity among individual cells in Figure S2 shows that AcGFP-SEPT9 expression varies within the cell population and that a significant fraction contains comparably low levels of the fluorescent reporter. Recall that these cells essentially lack endogenous SEPT9 ( Figure 5B), which implies that the present results, combined with data in Figure S1A, predict a subpopulation of cells in which the ratio of octameric to hexameric heteromers is low. This subpopulation is identifiable by a comparably weak fluorescence ( Figure 5C). The DNA-profiles shown in Figure S1, lower panels, suggest that the present manipulations of SEPT9 isoform expression do not significantly interfere with cell growth. Moreover, we did not note any increase of the mitotic index (see inserts in Figure S2, lower panels). It is notable that depletion of the native septin heteromer pool by means of shRNA SEPT7 expression, or depletion of octamers by means of shRNA SEPT9 expression, does not cause a detectable mitotic phenotype in cell lines of hematopoietic origin, such as K562 and Jurkat cells (Sellin et al., 2011a). 
This is contrast to adhesion-substrate dependent cell lines such as HeLa cells and embryonic mouse fibroblasts, but the reported defects during cell divisions are still relatively mild and only detectable in a subpopulation of cells (Estey et al., 2010;Fuchtbauer et al., 2011). There are presently no clues concerning cell type-specific differences with respect to the function of the septin system. Even so, it is still notable that deletion of septin genes in unicellular fungi like Saccharomyces cerevisiae and Schizosaccharomyces pombe results in different phenotypes and that the phenotype is subtle in the latter fungi (Oh and Bi, 2011). To construct SEPT9 isoform reporters with AcGFP fused to their N-terminus, a PCR generated fragment of AcGFP was inserted into the HindIII site introduced by the isoform-specific forward primers. The template and primers used to create the AcGFP-fragment with HindIII sites at both ends were as follows: Template: pAcGFP1-N1 (Cat. No. 632469, Clontech Laboratories, Mountain View, CA). 5'-primer: 5'-GGACCCAAGCTTCTATTCACCATGGTGAGCAAGGGCGCCGAG 3'-primer: 5'-GGGTCCAAGCTTCTTGTACAGCTCATCCATGCC The coding sequences of all PCR-generated fragments were confirmed by nucleotide sequence analysis.
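As a small, purely illustrative cross-check of the primer design described above (not part of the original protocol), a few lines of Python can verify that both primers contain the HindIII recognition sequence AAGCTT used for inserting the AcGFP fragment:

```python
# HindIII recognition sequence (5'->3'). Both primers listed above should
# contain it so that the AcGFP fragment carries HindIII sites at both ends.
HINDIII = "AAGCTT"

primers = {
    "AcGFP 5' primer": "GGACCCAAGCTTCTATTCACCATGGTGAGCAAGGGCGCCGAG",
    "AcGFP 3' primer": "GGGTCCAAGCTTCTTGTACAGCTCATCCATGCC",
}

for name, seq in primers.items():
    pos = seq.find(HINDIII)
    if pos >= 0:
        print(f"{name}: HindIII site found at position {pos}")
    else:
        print(f"{name}: no HindIII site found")
```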
2016-05-14T09:59:39.754Z
2012-11-01T00:00:00.000
{ "year": 2012, "sha1": "c72d956ed3a6f73ee9e00901c8d3a8e5e803b25e", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.1091/mbc.e12-06-0486", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "c72d956ed3a6f73ee9e00901c8d3a8e5e803b25e", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
15866699
pes2o/s2orc
v3-fos-license
A novel film–pore–surface diffusion model to explain the enhanced enzyme adsorption of corn stover pretreated by ultrafine grinding Background Ultrafine grinding is an environmentally friendly pretreatment that can alter the degree of polymerization, the porosity and the specific surface area of lignocellulosic biomass and can, thus, enhance cellulose hydrolysis. Enzyme adsorption onto the substrate is a prerequisite for the enzymatic hydrolysis process. Therefore, it is necessary to investigate the enzyme adsorption properties of corn stover pretreated by ultrafine grinding. Results The ultrafine grinding pretreatment was executed on corn stover. The results showed that ultrafine grinding pretreatment can significantly decrease particle size [from 218.50 μm of sieve-based grinding corn stover (SGCS) to 17.45 μm of ultrafine grinding corn stover (UGCS)] and increase the specific surface area (SSA), pore volume (PV) and surface composition (SSA: from 1.71 m2/g of SGCS to 2.63 m2/g of UGCS, PV: from 0.009 cm3/g of SGCS to 0.024 m3/g of UGCS, cellulose surface area: from 168.69 m2/g of SGCS to 290.76 m2/g of UGCS, lignin surface area: from 91.46 m2/g of SGCS to 106.70 m2/g of UGCS). The structure and surface composition changes induced by ultrafine grinding increase the enzyme adsorption capacity from 2.83 mg/g substrate of SGCS to 5.61 mg/g substrate of UGCS. A film–pore–surface diffusion model was developed to simultaneously predict the enzyme adsorption kinetics of both the SGCS and UGCS. Satisfactory predictions could be made with the model based on high R2 and low RMSE values (R2 = 0.95 and RMSE = 0.16 mg/g for the UGCS, R2 = 0.93 and RMSE = 0.09 mg/g for the SGCS). The model was further employed to analyze the rate-limiting steps in the enzyme adsorption process. Although both the external-film and internal-pore mass transfer are important for enzyme adsorption on the SGCS and UGCS, the UGCS has a lower internal-pore resistance compared to the SGCS. Conclusions Ultrafine grinding pretreatment can enhance the enzyme adsorption onto corn stover by altering structure and surface composition. The film–pore–surface diffusion model successfully captures features on enzyme adsorption on ultrafine grinding pretreated corn stover. These findings identify wherein the probable rate-limiting factors for the enzyme adsorption reside and could, therefore, provide a basis for enhanced cellulose hydrolysis processes. conversion of corn stover has attracted the interest of scientists around the world [2,3]. For the conversion of lignocellulosic biomass to bioethanol, the key bottleneck is the initial conversion of biomass to sugars. It is well known that lignocellulosic biomass, in its native form, is recalcitrant to hydrolysis with cellulase enzyme systems in the biochemical conversion process. To overcome biomass recalcitrance and improve cellulose accessibility, many chemical pretreatment methods (acid [4], alkali [5], ammonia fiber explosion [2] and so on [6]) were employed. However, these chemical pretreatment methods generate highly toxic effluents and cause negative impacts on the environment. Mechanical comminution is an environmentally friendly pretreatment that can alter the degree of polymerization, crystallinity degree, porosity and specific surface area of lignocellulosic biomass and, thus, enhance cellulose hydrolysis [7]. 
Most previous studies on the mechanical comminution pretreatment of lignocellulosic biomass were usually carried out by chipping (10-30 mm), grinding and milling (0.2-2 mm) [7][8][9]. Recently, ultrafine grinding (approximately 25 μm) technology, which can achieve a small particle size, large specific surface area, and high chemical activity [10], was also sporadically explored in the field of lignocellulose pretreatment. For example, Silva et al. investigated the effects of grinding processes on the enzymatic degradation of wheat straw [11]. The results showed that the ultrafine grinding pretreatment significantly enhanced enzymatic hydrolysis yield up to 10-fold as compared with coarsely grinding. Although some properties, such as the particle size and cellulose crystallinity, had been characterized to explain the hydrolysis mechanism after the ultrafine grinding pretreatment, some intrinsic properties, such as the adsorption kinetics, should be further investigated. Cellulase adsorption onto the substrate via the binding domain is a prerequisite step for the enzymatic hydrolysis process and directly affects the enzymatic hydrolysis yield of lignocellulosic biomass [12,13]. Thus, an adequate description of the adsorption step is indispensable for understanding and optimizing hydrolysis reaction, especially for that after the ultrafine grinding pretreatment. It is well known that ultrafine grinding increases the available specific surface area/pore volume [14] and, thus, improves the exposure level of the cellulose-binding domain, which is closely related to the cellulase adsorption kinetics. However, the cellulase adsorption kinetics of lignocellulosic biomass after the ultrafine grinding pretreatment has never been reported until now. Previous experimental and modeling studies on the cellulase adsorption of lignocellulosic biomass mainly focused on those pretreated by chemical methods, such as acid [15], hydrothermal [13], organosolv [13], and SO 2 -catalyzed steam explosion [16]. These studies commonly characterized cellulase adsorption by the Langmuir isotherm model, which describes the relationship between the amount of enzyme protein binding with substrate and the amount of enzyme protein free in solution after attaining equilibrium adsorption [17]. The Langmuir isotherm model can evaluate the maximum adsorption capacity of the substrate under different enzyme loadings, but it is not capable of expressing the adsorption kinetics of cellulase along with the adsorption time. The adsorption kinetics can be used to better understand the rate-controlling step of the mass transfer involved in the adsorption process. From a mechanistic viewpoint, the adsorption of cellulase onto lignocellulosic biomass can include three consecutive steps: the external diffusion of cellulase from bulk solution across the liquid film surrounding the solid biomass particles, internal diffusion of cellulase through the biomass particles by pore volume diffusion and surface diffusion, and the adsorption of cellulose molecules onto the biomass particles at the active sites ( Fig. 1). This study first investigated the enzyme adsorption kinetics of ultrafine grinding pretreated corn stover. Then, a film-pore-surface diffusion model was developed to explain the enzyme adsorption kinetics. The rate-limiting steps in the adsorption process were further investigated. 
To our knowledge, this is the first work in the literature to reveal the enzyme adsorption behavior of corn stover pretreated by ultrafine grinding, thus identifying wherein the probable rate-limiting factors for the enzyme adsorption reside and could, therefore, provide a basis for enhanced cellulose hydrolysis processes. Carbohydrates and lignin content The cellulose, hemicellulose, and lignin content of the sieve-based grinding corn stover (SGCS) and ultrafine grinding corn stover (UGCS) is listed in Table 1 [19]. It was also shown that there are no significant differences in the carbohydrates and lignin content of both substrates. Previous studies have noted that the chemical components have important adsorption interactions with enzyme molecules, although the enzyme adsorption of lignin is considered a nonproductive one [15]. Even if the two substrates present the similar contents, the grinding may affect the surface composition and, thus, change the adsorption capacity/affinity of the enzyme for the substrate. This is further corroborated by surface composition measurement for the two substrates. The surface areas of cellulose and lignin, which are two dominant components in the cellulase adsorption [20,21], were measured by determining the maximum adsorption capacity of the dyes Congo Red [22] and Azure B [23] on the substrates, respectively ( Table 1). The cellulose surface area of the UGCS (290.76 m 2 /g) was almost twofold higher than that of SGCS (168.69 m 2 /g). Compared with the lignin surface area of the SGCS (91.46 m 2 /g), that of the UGCS (106.7 m 2 /g) also moderately increased. These results indicated that the substrate pretreated by ultrafine grinding can induce more exposure of the surface composition (especially for cellulose), which will be favorable to enzyme adsorption. Figure 2 shows the particle size distributions of both the SGCS and UGCS. The particle size distribution was characterized by the median diameter (d 50 ) and the span defined by (d 90 -d 10 )/d 50 , where d 10 , d 50 and d 90 represent the 10th, 50th and 90th percentiles of the total volume, respectively [11]. The median sizes (d 50 ) of the UGCS and SGCS were 17.45 μm and 218.50 μm, respectively. The Particle size distribution of the UGCS and SGCS. These data were determined by a laser diffraction particle size analyzer spans of the UGCS and SGCS were 2.72 and 2.93, respectively. The smaller span value indicated a more uniform size distribution. Severe vibration ball milling under the ultrafine grinding condition destroyed the fiber structure and, thus, achieved significant particle size reduction and unified particle size distribution. The ultrafine grinding of crop residues was also reported by several studies. For example, Silva et al. investigated the median particle sizes and particle size distribution spans of wheat straw under the operating conditions of ball milling and jet milling [11]. Ball milling reduced the particle size from 270 to 16 μm over a 0-240 h period. The span first increased to more than 5 during the first 120 h and then decreased to 2.5 at the end of the 120 h. Jet milling reduced the median particle size of wheat straw from 107 to 22 μm and was much more rapid (85 min) than ball milling. A previous study by our team also explored the ultrafine grinding of wheat straw by 8 h of vibration ball milling and reported ultrafine wheat straw powder with a median size of 17.0 μm and a span of 4.0 [24]. 
Compared with previous studies, our study produced ultrafine powder of corn stover in a shorter time (30 min), which indicated less energy consumption. Specific surface area (SSA) and pore volume (PV) distribution The SSA and PV distribution of the SGCS and UGCS is listed in Table 1 and Fig. 3. The SSA of the UGCS was approximately 1.5-fold higher than that of the SGCS (Table 1). Although the values between the SSA and the surface composition areas were uncomparable due to different measured methods [22], their similar increased trends for the UGCS indicated that the ultrafine grinding pretreatment significantly affects substrate structure and surface composition. The PV of the UGCS was approximately threefold higher than that of the SGCS (Fig. 3a). The UGCS had a wider pore volume distribution (2-300 nm) than the SGCS (2-50 nm) based on differential curves of the pore volume distribution (Fig. 3b), which indicated that mesopores and macropores existed in the UGCS. The SSA and PV properties are important parameters for the conversion of lignocellulosic biomass to biofuels and are often useful to ascertain whether the comminution pretreatment technology is useful or not [25]. Commonly, the comminution pretreatment can enhance the SSA of lignocellulosic biomass. This is because drastic milling to the straw can destroy the structure of the lignocellulose, disorganizing the tightly ordered fibers and exposing more enzyme bonding sites [26,27]. Piccolo et al. found that the SSA increased by more than 60 % after ball milling compared to untreated wheat straw samples [28]. Furthermore, the SSA is highly sensitive to the particle size of lignocellulosic biomass. Zhang et al. reported a linear correlation of the SSA with particle size for pan-milling cellulose powder [29]. The SSA is not only related to the particle size, but is also strongly related to the PV of the lignocellulosic biomass. The surface area of the substrate can be divided into an interior surface area, reflected by the biomass porosity, and an exterior surface area, largely determined by the particle size. Compared with the sieve-based grinding pretreatment, the ultrafine grinding pretreatment can produce more significant changes in the internalpore structure, and these changes are mainly responsible for the enzymatic adsorption and hydrolysis of biomass [30,31]. The size of a cellulase is approximately 5.1 nm [32] and, hence, only those pores larger than 5.1 nm are accessible to enzyme. The pore accessible to enzyme is correlated with the enzyme diffusion resistance and adsorption rate [32,33]. Compared with the SGCS, the UGCS has a higher volume fraction of pores larger than 5.1 nm in diameter (Fig. 3a). Equilibrium adsorption The Langmuir isotherm model agreed well with the equilibrium adsorption data of both the SGCS and UGCS (Fig. 4) based on their statistical parameters (R 2 ≥ 0.90, RMSE ≤ 0.20 mg/g). The Langmuir parameters, including the maximum adsorption capacity (q m ), affinity constant (K a ) and bonding strength (S = q m × K a ), are listed in Table 2. A number of previous studies carried out the cellulase equilibrium adsorption of lignocellulosic biomass and also observed robust adaptability of the Langmuir model [34]. For example, Machado et al. investigated the adsorption characteristics of cellulase on Avicel, pretreated sugarcane bagasse, and lignin [13]. Langmuir model isotherms were chosen to compare the kinetic properties of these various enzyme-substrate systems. Qi et al. 
explored cellulase adsorption of two different pretreated wheat straws and proposed a good fit to the cellulase adsorption data by the Langmuir adsorption isotherm [35]. It is difficult to directly compare the Langmuir parameters of this study to those of previous studies for different combinations for enzyme, substrate, and temperature. Zhang and Lynd collected Langmuir parameters for the cellulase adsorption of lignocellulosic biomass and observed wide variations [36]. However, the Langmuir parameters of both the SGCS and UGCS in this study can be directly compared because of the same experimental conditions. The results showed that the q m (5.61 mg/g) and K a (11.5 mL/mg) values obtained for the UGCS were much higher than those (q m = 2.83 mg/g, K a = 6.22 mL/mg) for the SGCS. These results indicated that the substrate pretreated by ultrafine grinding has a stronger adsorption capability of enzyme molecules. The reason for the high cellulase adsorption amount of UGCS may be because the ultrafine grinding achieved significant changes in the intact cellulose-hemicellulose-lignin network. More generated pores, demonstrated by a high SSA and PV distribution, increased the diffusion of enzyme molecules into the substrate. More importantly, more exposed binding sites of the substrate, demonstrated by a high cellulose and lignin surface area, improved the substrate accessibility to cellulase. The cellulase adsorption kinetic profiles of the SGCS and UGCS are shown in Fig. 5. Compared with the kinetic data of the SGCS, the adsorption amount of the UGCS at any time was much higher. This may be explained by the changes induced by the ultrafine grinding pretreatment, which yielded high SSA, PV and surface composition areas. On the one hand, high SSA, PV and surface composition values produced a large exposure area of the substrate and, thus, the binding sites of the substrate to the cellulase are also accordingly increased to achieve high enzymatic adsorption capacity. On the other hand, large pore openings produce less restriction and provide efficient adsorption of the enzyme molecules [37]. Wang et al. [11] investigated the cellulase adsorption and cellulose accessibility to the cellulase of the set of pretreated substrates with different pore volume distributions [31]. The authors found that increasing the pore volume in the substrates increases the cellulose accessibility to cellulase, which correlated well with the amounts of adsorbed cellulase. The film-pore-surface diffusion models were developed to simultaneously predict the cellulase adsorption kinetics of both the SGCS and UGCS (Fig. 5 and Table 3). The model prediction agreed reasonably well with the observed kinetic data based on high R 2 and low RMSE values (R 2 = 0.95 and RMSE = 0.16 mg/g for the UGCS, R 2 = 0.93 and RMSE = 0.09 mg/g for the SGCS). Internal diffusion is an important mass transport process during the enzyme adsorption of corn stover particles and includes pore and surface diffusion. The fitted pore diffusion coefficients (D p ) were found to be 9.45 × 10 −7 cm 2 /min for the SGCS and 6.04 × 10 −6 cm 2 /min for the UGCS. The magnitude of D p is affected by pore structure parameters, such as the pore size, porosity, and tortuosity. The ultrafine grinding pretreatment can reduce the pore diffusion resistance by changing these pore structure parameters and then enhance the pore diffusion coefficient. 
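To put the fitted pore diffusion coefficients in perspective, a rough back-of-the-envelope scaling (not a calculation from the paper) compares the characteristic internal diffusion time t ≈ R²/D p for the two substrates, using the median particle radii (d 50 /2) and the D p values quoted above:

```python
# Rough scaling estimate (not from the paper): characteristic internal
# pore-diffusion time t ~ R^2 / Dp, using the median particle radii (d50/2)
# and the fitted pore diffusion coefficients quoted in the text.
substrates = {
    "SGCS": {"d50_um": 218.50, "Dp_cm2_per_min": 9.45e-7},
    "UGCS": {"d50_um": 17.45,  "Dp_cm2_per_min": 6.04e-6},
}

for name, p in substrates.items():
    R_cm = (p["d50_um"] / 2.0) * 1e-4        # micrometers -> centimeters
    t_min = R_cm ** 2 / p["Dp_cm2_per_min"]  # characteristic diffusion time
    print(f"{name}: R = {R_cm:.3e} cm, t ~ R^2/Dp = {t_min:.2f} min")
```

Under this crude estimate, the smaller particles and larger D p of the UGCS shorten the internal diffusion timescale by roughly three orders of magnitude, which is qualitatively consistent with the faster adsorption observed for the UGCS.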
Compared with the surface diffusion coefficient (D_s) of the SGCS, that of the UGCS was smaller by several orders of magnitude. Surface diffusion is often described by a hopping mechanism in which the migrating particles are viewed as hopping between distinct, energetically favorable adsorption sites on the surface [38]. When an adsorbed particle obtains sufficient activation energy, it can overcome the energy barrier between adsorption sites and jump to a neighboring site. Thus, the speed of surface diffusion depends on the bond strength of the attached sorption site and the affinity of the recipient site. The equilibrium adsorption data showed that the UGCS had a higher affinity constant (K_a) and bonding strength (S) than the SGCS. This may explain the low D_s value of the UGCS. External-film diffusion is another mass transfer process and is characterized by the external-film transfer coefficient (K_L). The fitted K_L value of the UGCS was much lower than that of the SGCS. The relationship between K_L and the particle size is not straightforward. Badruzzaman et al. quantified the arsenate adsorption on granular ferric hydroxide by the film-surface diffusion model and then evaluated the K_L dependence on the particle size [39]. The results showed that the obtained K_L values did not correlate with the particle radius. The magnitude of K_L is affected not only by the adsorbent particle size but also by the hydraulics of the adsorbent-solution system.

(Fig. 5 caption: Comparison of observed and predicted cellulase adsorption kinetics for the SGCS and UGCS. The cellulase adsorption kinetic experiments were performed for 2, 5, 10, 20, 30, 60, 90, and 120 min with an enzyme loading of 6 mg/g substrate. The predicted values were obtained by the current film-pore-surface diffusion model. Error bars represent the standard deviation of the measured amounts of adsorbed cellulase on the substrate.)

To measure the relative importance of external-film mass transfer to internal-pore mass transfer within the two substrates, the Biot number (B_i) was used as an indicator and calculated as B_i = K_L × R/D_e, where K_L is the external-film transfer coefficient, R is the particle radius, and D_e is the effective diffusion coefficient in the internal pore. Traegner and Suidan suggested that the external-film mass transfer is the rate-controlling step for B_i < 1, both the external-film and internal-pore mass transfer are rate-controlling for 1 ≤ B_i ≤ 100, and the internal-pore mass transfer is the rate-controlling step for B_i > 100 [40]. The calculated B_i values for the SGCS and UGCS were 92.81 and 39.57, respectively. Hence, these B_i values indicated that both the external-film and internal-pore mass transfer were important for cellulase adsorption on the SGCS and UGCS. However, the smaller B_i value of the UGCS indicated a lower internal-pore resistance within the UGCS. This finding showed that the ultrafine grinding pretreatment significantly decreases the particle size and improves the pore diffusion properties, such as the pore size, porosity, pore volume and pore openings, resulting in less internal-pore resistance within the UGCS.

Conclusions

The ultrafine grinding pretreatment was applied to corn stover. The results showed that the ultrafine grinding pretreatment can significantly decrease the particle size (from 218. …). The model was further employed to analyze the rate-limiting steps in the enzyme adsorption process.
Although both external-film and internal-pore mass transfer are important for the enzyme adsorption on the SGCS and UGCS, the UGCS has a lower internal-pore resistance than the SGCS because the ultrafine grinding pretreatment significantly decreased the particle size and improved the pore diffusion properties such as the pore size, porosity, pore volume and pore openings. These findings identify where the probable rate-limiting factors for enzyme adsorption reside and could therefore provide a basis for enhancing cellulose hydrolysis processes.

Samples and enzyme preparation

Corn stover was collected in 2013 from the Shangzhuang agronomy farm of the China Agricultural University, located in Beijing, China. The corn stover was air dried and milled to a coarse particle size (approximately 1-2 cm). Then, it was dried in a forced-air oven at 45 °C for 48 h and milled to a size less than 1 mm in an RT-34 hammer mill (Rong Tsong Precision Technology Co., Taiwan). The milled material was sieved with a JH-300A sieve shaker fitted with a 40-mesh screen (Jiahe Machinery Co., Henan province, China) to obtain the SGCS samples. Then, 400 g of the powder was further milled using a CJM-SY-B ultrafine vibration grinding mill (Taiji Ring Nano Products Co., Hebei, China) to obtain the UGCS samples. The corn stover powder was mixed with ZrO2 balls (6-10 mm diameter) in a 1:2 volume ratio for 0.5 h, and the instrument temperature was controlled below 30 °C. All powders obtained were sealed in PVC plastic bags at room temperature before use in all experiments. Celluclast 1.5 L (cellulase) was purchased from Sigma-Aldrich (St. Louis, MO, USA); its protein content was 36.7 mg/mL.

Analysis of the surface areas of cellulose and lignin

The surface areas of cellulose and lignin were measured according to the literature [21]. The surface areas of cellulose and lignin on the SGCS and UGCS were analyzed by determining the monolayer adsorption maxima of Congo Red (Direct Red 28) [22] and Azure B [23], respectively. For each adsorption, 100 mg of dry material was weighed into a 25 mL conical flask; 10 mL of the dye (Congo Red in 30 mM phosphate buffer at pH 6 and Azure B in 50 mM Na-phosphate buffer at pH 7) was added to the conical flask, and the mixture was incubated for 24 h on a shaker at 200 rpm. Congo Red adsorption was performed at 60 °C and Azure B adsorption at 25 °C. After incubation, the liquid fraction was separated by centrifugation and the supernatant was filtered through a 0.45 μm PTFE filter. The residual dye concentrations and reference solutions were determined spectrophotometrically (Congo Red at 498 nm and Azure B at 647 nm) and the amounts of adsorbed dye were calculated. The adsorption experiments were performed in duplicate using Congo Red concentrations of 4, 2, 1, 0.25, 0.05 and 0 g/L and Azure B concentrations of 2, 1, 0.5, 0.25, 0.1 and 0 g/L. The parameters of the adsorption isotherm were fitted to the Langmuir isotherm in MATLAB (Mathworks, Natick, MA, USA). The cellulose surface area per unit of dry material was then calculated from the adsorption maximum, with 1 g of adsorbed dye representing 1055 m² of surface [22], and the lignin surface area was obtained from the maximum adsorption capacity and the area (1.297 m²/mg) covered by Azure B [23].

Particle size determination

The particle size distribution was measured using an LS230 laser diffraction particle size analyzer (Beckman Coulter Inc., Miami, FL, USA). The particle measurement range is from 0.375 μm to 2000 μm.
Before measurement, the samples were dispersed in distilled water to form a uniform suspension and were then poured into the measurement instrument with ultrasound. An LS v3.29 system based on the Fraunhofer mode was used to measure the particle size.

Specific surface area, pore size and pore volume distribution determination

The specific surface area, pore size and pore volume distribution of the SGCS and UGCS were measured with an Autosorb-iQ porosity analyzer (Quantachrome Instruments, FL, USA). The samples were degassed at 80 °C for 7 h and then cooled in the presence of nitrogen gas at −195 °C, allowing the nitrogen gas to condense on the surfaces and within the pores. The specific surface area was calculated using the Brunauer-Emmett-Teller (BET) model [41], which relates the gas pressure to the volume of gas adsorbed. The pore volume distribution with respect to the pore size was estimated using the Barrett-Joyner-Halenda (BJH) model [42].

Enzyme adsorption kinetic experiments

The enzyme adsorption kinetic experiments were conducted for the SGCS and UGCS with an enzyme loading of 6 mg protein/g substrate, which is within the usual loading range for enzymatic hydrolysis. The lignocellulose substrate-binding studies were performed in centrifuge tubes (10 mL) with a sodium citrate buffer (0.05 M, pH 4.8) using a 1 % (w/v) substrate concentration and incubated for 2, 5, 10, 20, 30, 60, 90, and 120 min in a shaking water bath at 4 °C to avoid hydrolysis. Every experiment was run in duplicate, and substrate blanks without enzyme and enzyme blanks without substrate were also analyzed. After incubation, all samples were centrifuged for 3 min in a refrigerated centrifuge at 6000 rpm. The supernatant was filtered and used to determine the free enzyme by measuring the protein concentration in the supernatant with the Bradford assay using Coomassie brilliant blue dye [43]. The bound enzyme was calculated by subtracting the free enzyme concentration from the initial enzyme concentration loaded.

Equilibrium enzyme adsorption experiments

Different enzyme loadings (1.5-10.5 mg/g substrate of Celluclast 1.5 L) were applied and incubated for 2 h under the same conditions as above. The calculated bound enzyme concentration was related to the free enzyme concentration using the following Langmuir equilibrium isotherm:

q_b = q_m K_a C_f / (1 + K_a C_f)    (1)

where q_b is the equilibrium amount of solid-phase bound enzyme (mg protein/g substrate), q_m is the maximum solid-phase bound capacity (mg protein/g substrate), K_a is the affinity constant (mL/mg protein), and C_f is the equilibrium concentration of free enzyme in solution (mg protein/mL). The Langmuir adsorption constants (K_a and q_m) of the SGCS and UGCS were obtained by nonlinear regression using MATLAB (Mathworks, Natick, MA, USA). The binding strength (S, in mL/g substrate), another constant derived from the Langmuir adsorption isotherm, can be used to estimate the stability of the enzyme bound to the substrates and is calculated as S = q_m × K_a.

Film-pore-surface diffusion adsorption model

The adsorption of cellulase onto lignocellulosic biomass involves three consecutive steps: external diffusion of the cellulase from the bulk solution across the liquid film surrounding the solid biomass particles, internal diffusion of the cellulase through the biomass particles by pore volume diffusion and surface diffusion, and adsorption of the cellulase molecules onto the biomass particles at the active sites (Fig. 1).
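As a small illustration of the bound-enzyme calculation described above (bound = loaded minus free, normalized by the substrate mass), the following sketch uses hypothetical concentration values; only the 1 % (w/v) substrate concentration is taken from the experimental description.

```python
# Sketch: bound enzyme per gram of substrate from Bradford-measured free protein.
# Concentrations are hypothetical placeholders.
c_initial = 0.06       # mg protein/mL loaded into the tube
c_free = 0.025         # mg protein/mL measured in the supernatant (Bradford)
substrate_conc = 0.01  # g substrate per mL (1% w/v, as in the kinetic experiments)

q_bound = (c_initial - c_free) / substrate_conc   # mg protein / g substrate
print(f"bound enzyme: {q_bound:.2f} mg/g substrate")
```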
The film-pore-surface diffusion adsorption model was proposed based on the following assumptions: (a) the adsorbent particles are spherical; (b) the adsorption rate at an active site is instantaneous; and (c) the amount of solute adsorbed on the adsorbent can be represented by the Langmuir isotherm equation. The rate of mass transfer in the external film surrounding the solid particle is assumed to be directly proportional to the concentration difference across the film. Therefore, the external-film mass transfer is given by

V_L \frac{dC_L}{dt} = -K_L A \left( C_L - C_{P,r}\big|_{r=R} \right)    (2)

where t is the adsorption time, V_L is the total volume of the liquid phase, C_L is the concentration of enzyme in the liquid phase, C_{P,r}|_{r=R} is the enzyme concentration at the particle surface, K_L is the external-film mass transfer coefficient, and A is the outer surface area of all the particles, estimated as

A = \frac{3m}{\rho_a R}    (3)

where m is the mass of all the particles, R is the radius of the particle, and ρ_a is the apparent density of the particle, estimated as

\rho_a = \frac{1}{V_p + 1/\rho_s}    (4)

where V_p is the pore volume per mass of the particle and ρ_s is the solid density, estimated as

\rho_s = \frac{1}{M_c/\rho_c + M_h/\rho_h + M_l/\rho_l + M_o/\rho_o}    (5)

where M_c, M_h, M_l, M_o and ρ_c, ρ_h, ρ_l, ρ_o are the mass percentages on a dry basis and the densities of cellulose, hemicellulose, lignin and other components of the solid particles, respectively. Based on the mass balance for the adsorption of enzyme with internal-pore diffusion in a spherical particle, the following equation can be obtained:

\phi\varepsilon \frac{\partial C_{P,r}}{\partial t} + \rho_a \frac{\partial q_r}{\partial t} = \frac{1}{r^2}\frac{\partial}{\partial r}\left[ r^2 \left( D_p \frac{\partial C_{P,r}}{\partial r} + \rho_a D_s \frac{\partial q_r}{\partial r} \right) \right]    (6)

where C_{P,r} is the enzyme concentration in the particle pores at position r, ϕ is the ratio of the pore volume accessible to the enzyme (V_pa) to the total pore volume (V_p), r is the radial position in the particle, q_r is the solid-phase enzyme adsorption amount at position r, D_p is the pore diffusion coefficient of the enzyme, D_s is the surface diffusion coefficient of the enzyme, and ε is the porosity of the solid particle, estimated as

\varepsilon = \frac{V_p}{V_p + 1/\rho_s}    (7)

As the adsorption step occurs much more rapidly than the mass transfer steps in physical adsorption, the pore solution concentration and the solid-phase adsorbed amount can be related by the Langmuir isotherm equation:

q_r = f(C_{P,r}) = \frac{q_m K_a C_{P,r}}{1 + K_a C_{P,r}}    (8)

Differentiating Eq. (8) yields

dq_r = f'(C_{P,r})\, dC_{P,r}    (9)

Substituting Eq. (9) into Eq. (6) gives

\left[ \phi\varepsilon + \rho_a f'(C_{P,r}) \right] \frac{\partial C_{P,r}}{\partial t} = \frac{1}{r^2}\frac{\partial}{\partial r}\left( r^2 D_e \frac{\partial C_{P,r}}{\partial r} \right)    (10)

where D_e is the effective diffusion coefficient in the internal pore, given as

D_e = D_p + f'(C_{P,r})\, \rho_a D_s    (11)

Substituting f'(C_{P,r}) = q_m K_a/(1 + K_a C_{P,r})^2 into Eq. (10) gives

\left[ \phi\varepsilon + \frac{\rho_a q_m K_a}{(1 + K_a C_{P,r})^2} \right] \frac{\partial C_{P,r}}{\partial t} = \left[ D_p + \frac{D_s \rho_a q_m K_a}{(1 + K_a C_{P,r})^2} \right] \frac{\partial^2 C_{P,r}}{\partial r^2} - \frac{2 q_m K_a^2 D_s \rho_a}{(1 + K_a C_{P,r})^3} \left( \frac{\partial C_{P,r}}{\partial r} \right)^2 + \frac{2}{r}\left[ D_p + \frac{D_s \rho_a q_m K_a}{(1 + K_a C_{P,r})^2} \right] \frac{\partial C_{P,r}}{\partial r}    (12)

The average enzyme adsorption amount in the solid particles (q_a) is given by

q_a = \frac{\int_0^R 4\pi r^2 q_r\, dr}{\tfrac{4}{3}\pi R^3}    (13)

The initial and boundary conditions are

t = 0:\quad C_L = C_0,\quad C_{P,r} = 0 \ (0 \le r \le R)    (14)

t > 0:\quad K_L\left( C_L - C_{P,r}\big|_{r=R} \right) = D_e \frac{\partial C_{P,r}}{\partial r}\Big|_{r=R},\qquad \frac{\partial C_{P,r}}{\partial r}\Big|_{r=0} = 0    (15)

The film-pore-surface diffusion model can be numerically solved by combining Eqs. (2)-(15). The resulting C_{P,r} values can be used to calculate the average enzyme adsorption amount in the solid particles (q_a) according to Eqs. (8) and (13). The predicted q_a values were compared with the observed values and were used to estimate the model parameters.
The model parameters (K L , D p , and D s ) were simultaneously fitted to all experimental data using a custom-written program in MATLAB (Mathworks, Natick, MA, USA). Table 4 provides a description of these symbols.
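To illustrate how the model above can be evaluated for a given set of parameters, the following is a minimal method-of-lines sketch in Python (SciPy assumed), standing in for the custom MATLAB program. All parameter values are placeholders rather than the fitted values reported in the paper, and the boundary treatment is deliberately simple.

```python
# Sketch: method-of-lines solution of the film-pore-surface diffusion model
# (Eqs. 2, 10-15). Parameters below are illustrative placeholders; the goal is
# only to show how C_L(t) and the average adsorbed amount q_a(t) follow from a
# given (K_L, D_p, D_s).
import numpy as np
from scipy.integrate import solve_ivp

R    = 5e-3        # particle radius, cm
K_L  = 1e-3        # external-film mass transfer coefficient, cm/min
D_p  = 1e-6        # pore diffusion coefficient, cm^2/min
D_s  = 1e-9        # surface diffusion coefficient, cm^2/min
q_m, K_a = 5.0, 10.0               # Langmuir constants, mg/g and mL/mg
rho_a, phi, eps = 0.4, 0.8, 0.6    # apparent density (g/mL), accessible fraction, porosity
C0, V_L, m_s = 0.06, 10.0, 0.1     # initial conc. (mg/mL), liquid volume (mL), solids (g)
A = 3.0 * m_s / (rho_a * R)        # outer surface area of all particles, cm^2 (Eq. 3)

N = 60
r = np.linspace(0.0, R, N)
dr = r[1] - r[0]

fprime = lambda c: q_m * K_a / (1.0 + K_a * c) ** 2        # dq/dC from Eq. (8)
D_e    = lambda c: D_p + rho_a * D_s * fprime(c)            # Eq. (11)

def rhs(t, y):
    C_L, C = y[0], y[1:]
    # fluxes F = r^2 * D_e * dC/dr at cell faces (zero at r = 0 by symmetry)
    rf = 0.5 * (r[:-1] + r[1:])
    Cf = 0.5 * (C[:-1] + C[1:])
    F = rf**2 * D_e(Cf) * np.diff(C) / dr
    F_R = R**2 * K_L * (C_L - C[-1])            # film condition at r = R (Eq. 15)
    F_all = np.concatenate(([0.0], F, [F_R]))
    div = np.diff(F_all) / dr
    rr = np.where(r > 0, r, 0.5 * dr)           # crude fix to avoid dividing by zero at r = 0
    dC = div / rr**2 / (phi * eps + rho_a * fprime(C))
    dC_L = -K_L * A / V_L * (C_L - C[-1])       # Eq. (2)
    return np.concatenate(([dC_L], dC))

y0 = np.concatenate(([C0], np.zeros(N)))
sol = solve_ivp(rhs, (0.0, 120.0), y0, method="BDF", t_eval=[2, 5, 10, 30, 60, 120])

# average adsorbed amount q_a(t) from Eqs. (8) and (13)
q_r = q_m * K_a * sol.y[1:, :] / (1.0 + K_a * sol.y[1:, :])
q_a = 3.0 / R**3 * np.trapz(q_r * r[:, None]**2, r, axis=0)
print(np.round(q_a, 3))
```

In a fitting workflow, this forward model would be wrapped in an optimizer that adjusts K_L, D_p and D_s until the predicted q_a values match the observed kinetic data.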
2018-04-03T04:47:17.626Z
2016-08-30T00:00:00.000
{ "year": 2016, "sha1": "6621d47bdf987cf8fc1d4158e7764cbe8d515e99", "oa_license": "CCBY", "oa_url": "https://biotechnologyforbiofuels.biomedcentral.com/track/pdf/10.1186/s13068-016-0602-2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6621d47bdf987cf8fc1d4158e7764cbe8d515e99", "s2fieldsofstudy": [ "Chemistry", "Environmental Science" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
19019048
pes2o/s2orc
v3-fos-license
Peritoneal tuberculosis presenting with portal vein thrombosis and transudative ascites - a diagnostic dilemma: case report

Background Peritoneal tuberculosis is an important problem in regions of the world where tuberculosis is still prevalent (Chest 1991; 99:1134). Atypical presentations such as portal vein thrombosis can delay diagnosis or result in misdiagnosis (Gut 1990; 31:1130, Acta Clin Belg 2012; 67(2):137-9, J Cytol Histol 2014; 5:278, Digestive Diseases and Sciences 1991; 36(1):112-115). A high index of suspicion is required for the diagnosis of peritoneal tuberculosis, as the analysis of peritoneal fluid for tuberculous bacilli is often ineffective and may increase mortality due to delayed diagnosis (Clin Infect Dis 2002; 35:409-13). In light of new evidence, peritoneal biopsy through laparoscopy or laparotomy has emerged as the gold standard for diagnosis (Clin Infect Dis 2002; 35:409-13). Case presentation We report the case of a 35-year-old Sri Lankan female employed in a Middle Eastern country who presented with progressive abdominal distention and constitutional symptoms of four months' duration. She had been investigated abroad and diagnosed with ascites and chronic portal vein thrombosis, following which warfarin therapy had been commenced on suspicion of an underlying thrombophilia. Despite treatment her symptoms had worsened, and she therefore decided to return to Sri Lanka for further evaluation. After ruling out inherited thrombophilic states and the antiphospholipid syndrome, further investigations revealed a transudative ascites and high inflammatory markers. The tuberculosis work-up on peritoneal fluid was negative. We therefore proceeded with laparoscopy, which showed multiple nodular deposits on the abdominal wall, bowel and omentum, and peritoneal biopsy revealed granulomatous inflammation with caseous-type necrosis compatible with Mycobacterium tuberculosis infection. This was confirmed by identification of the tuberculosis genome in the biopsy sample, confirming a diagnosis of peritoneal tuberculosis with secondary portal vein thrombosis and cavernous transformation due to local inflammation. The patient was started on anti-tuberculosis treatment and warfarin was discontinued, following which she made a remarkable recovery. Conclusion Peritoneal tuberculosis can present with unusual manifestations such as portal vein thrombosis and transudative ascites, causing a diagnostic dilemma. Ascitic fluid analysis is generally not diagnostic. Under such circumstances peritoneal biopsy should be performed, as it has a good diagnostic yield and accuracy.

Background

Peritoneal tuberculosis is an important problem in regions of the world where tuberculosis is still prevalent [1]. It can present with a spectrum of clinical manifestations ranging from ascites, its typical form, to unusual presentations like portal vein thrombosis. Atypical presentations can mislead clinicians and result in delayed diagnosis or misdiagnosis [2][3][4][5]. A high index of suspicion is required for diagnosis, as analysis of peritoneal fluid for tuberculous bacilli has not only proven ineffective but may also delay diagnosis, resulting in increased mortality [6]. In light of new evidence, peritoneal biopsy through laparoscopy or laparotomy has emerged as the gold standard for diagnosis [6][7][8].
Case presentation

A 35-year-old Sri Lankan housemaid working in a Middle Eastern country had initially been investigated for progressive abdominal distension and diagnosed to have ascites with chronic portal vein thrombosis. Treatment with warfarin had been commenced on suspicion of an underlying thrombophilia, but despite the treatment the abdominal distension worsened and she developed marked loss of appetite and loss of weight. She therefore decided to return to Sri Lanka for further investigation. There was no significant family history of thrombophilic conditions, personal history of thrombosis elsewhere, symptoms suggestive of systemic lupus erythematosus or the antiphospholipid syndrome, or past history of intra-abdominal sepsis. She had no personal or contact history of tuberculosis and no symptoms of active pulmonary tuberculosis. Colonic, breast or ovarian malignancies were not documented among family members. On examination she was emaciated, with a body mass index (BMI) of 18 kg/m². There was no pallor, icterus, lymphadenopathy, photosensitive skin rashes, alopecia or oral ulcers. Peripheral stigmata of chronic liver cell disease were absent, and facial or lower limb edema was not present. The abdomen was grossly distended with ascites. There was no hepatosplenomegaly or other abdominal or pelvic masses. Examination of the cardiovascular, respiratory and central nervous systems was unremarkable. Investigations revealed a normal full blood count (WBC 9.18 × 10³/µL with N 75%, L 12%, M 9.6%, E 2.5%, B 0.2%; Hb 12 g/dL; platelets 349 × 10³/µL) with raised inflammatory markers: CRP 139 mg/L and ESR 70 mm in the first hour. Liver function tests, including serum albumin, and the coagulation profile were normal. Thrombophilic, antiphospholipid and autoimmune screenings were unremarkable. The chest X-ray was normal. Peritoneal fluid analysis revealed a transudative ascites with lymphocytosis (WBC 1300/mm³, RBC 1500/mm³, polymorphs 8%, lymphocytes 92%). The serum to ascites albumin gradient (SAAG) was 1.3 g/dL. Contrast-enhanced computed tomography of the abdomen and pelvis revealed moderate ascites and chronic portal vein thrombosis with cavernous transformation (Fig. 1), but no evidence of portal hypertension. There were no abdominal or pelvic masses. The Mantoux test was positive (20 mm) but the QuantiFERON Gold assay was negative. The ascitic fluid adenosine deaminase (ADA) level was in the non-tuberculosis range, and peritoneal fluid acid-fast bacilli staining and PCR for tuberculosis genome detection were negative. Peritoneal fluid culture revealed no growth. With thrombophilic conditions, intra-abdominal malignancy and sepsis having been ruled out, we were left in a diagnostic dilemma. There was no evidence of tuberculous peritonitis apart from the high index of suspicion due to the high background prevalence. So, three weeks after the peritoneal fluid analysis, we performed a laparoscopy, which revealed multiple nodular deposits on the abdominal wall, bowel and omentum. Peritoneal biopsy showed granulomatous inflammation with caseous-type necrosis compatible with Mycobacterium tuberculosis infection. Subsequently, PCR identified the tuberculosis genome in the biopsy sample. Ultimately a diagnosis of peritoneal tuberculosis complicated by chronic portal vein thrombosis was made. The patient was referred to the central anti-tuberculosis treatment unit and commenced on a standard anti-tuberculosis treatment regimen with a fixed-dose combination of isoniazid, ethambutol, rifampicin and pyrazinamide.
Subsequently warfarin was discontinued .On follow up, one month later her liver enzymes were noted to be elevated. Therefore, all drugs were withheld and introduced gradually at weekly intervals with close monitoring of the liver functions. Within one month liver enzymes returned to baseline and she was reestablished on the standard regimen. Treatment was continued according to the current guidelines for a total of seven months including the month during which the drugs were reintroduced. Ultrasound scan of the abdomen performed a few weeks after reestablishing standard treatment showed resolution of ascites. She made a remarkable recovery at the end of treatment with no further complications. Discussion Peritoneal tuberculosis is an important health concern in parts of the world where its prevalence is still high. Peritoneum is an uncommon site of extra pulmonary infection and the risk is increased in patients with cirrhosis, HIV infection, diabetes mellitus, underlying malignancy, following treatment with anti-tumor necrosis factor (TNF) agents, and in patients undergoing continuous ambulatory peritoneal dialysis (CAPD) [1,9]. Infection most commonly results from reactivation of latent tuberculous foci in the peritoneum that were established following hematogenous spread from a primary lung focus [1]. Less frequently the organisms can enter the peritoneal cavity transmurally from an infected small intestine or contiguously from tuberculous salpingitis [10]. Although the patient had no direct contact history or any of the listed predisposing factors, Sri Lanka is a country with high background prevalence, and she may have had primary subclinical infection with hematogenous spread and subsequent reactivation. According to the available literature the majority of patients present with ascites and constitutional symptoms such as anorexia, fever and loss of weight [2,11,12]. It is more prevalent in females and seen more commonly in the third and fourth decades of life [2,12]. Portal vein thrombosis is a rare manifestation of the disease and has been described mainly in case reports [3][4][5]. A high index of suspicion is needed for diagnosis of peritoneal tuberculosis and it should be included in the differential diagnosis of unexplained lymphocytic ascites with SAAG < 1.1 g/ dl [11] In contrast , though lymphocytic predominant, ascites was a transudate in this case even in the absence of concomitant portal hypertension or cirrhosis. This again is unusual but the study carried out by Manohar A et al. describes such presentations [2]. Examination of the ascitic fluid including staining for acid fast bacilli is known to have a very low yield and the mortality associated with waiting for culture results has been demonstrated to be high [6]. In contrast, peritoneal biopsy either by laparoscopy or laparotomy has been proven in several studies to be the gold standard [6][7][8]. In this patient all diagnostic modalities including tuberculosis genome detection tests on ascitic fluid were inconclusive making the diagnosis a dilemma. But high index of suspicion and peritoneal sampling through laparoscopy ultimately led to the correct diagnosis. In the majority the standard treatment as for pulmonary tuberculosis leads to rapid clinical improvement [13]. Conclusion Peritoneal tuberculosis can present with unusual manifestations like portal vein thrombosis and transudative ascites in the absence of portal hypertension making the diagnosis a dilemma. Ascitic fluid analysis is generally inconclusive. 
Under such circumstances peritoneal biopsy should be performed as it has a good diagnostic yield and accuracy. Consent statement Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor of this journal.
2017-06-30T23:07:56.954Z
2015-09-30T00:00:00.000
{ "year": 2015, "sha1": "7626f230f2c93d3b49d2ac38e41c363dee57a725", "oa_license": "CCBY", "oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/s12879-015-1122-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0ddc7bcebd638703e8d1a0f79832c4a6e6a7957a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119519729
pes2o/s2orc
v3-fos-license
X-ray Continuum Slope and X-ray Spectral Features in NLS1 Galaxies The idea that some of the unusual features in the X-ray spectra of Narrow-Line Seyfert 1 galaxies (NLS1s) are due to the steep X-ray continuum is tested by comparing photoionization model calculations with various observed properties of Seyfert 1 galaxies. A meaningful comparison must involve the careful use of the right X-ray ionization parameter, designated here U(oxygen). When this is done, it is found that the strength of the continuum absorption features is insensitive to the exact slope of the 0.1-50 keV continuum. It is also shown that the complex of iron L-shell lines near 1 keV can produce strong absorption and emission features, depending on the gas distribution and line widths. While this may explain some unusual X-ray features in AGN, the predicted intensity of the features do not distinguish NLS1 from broader line sources. Finally, acceleration of highly ionized gas, by X-ray radiation pressure, is also not sensitive to the exact slope of the X-ray continuum. Introduction The steep X-ray spectrum of Narrow Line Seyfert 1 galaxies (NLS1s) has been shown to be highly correlated with many of the unusual properties of these sources (e.g. Boller, Leighly and Wills' articles in this volume). It is therefore important to study the consequences of this continuum shape and its influence on the highly ionized gas (hereafter HIG) in such sources. In particular, it is interesting to test the idea that steep slope active galactic nuclei (AGN) contain HIG which is either significantly more ionized or significantly more neutral than the same component in broad line Seyfert 1 galaxies (BLS1s). If correct, this would have significant influence on the observed X-ray features and, perhaps, also on the properties of the associated UV absorption lines. This paper presents the results of new model calculations pertaining to the strength (i.e. the optical depth) of the continuum absorption features around 0.7-0.9 keV, the absorption and emission lines in the two sub-classes of Seyfert 1 galaxies, and the motion of the HIG in NLS1s. In what follows, the dividing line between NLS1s and BLS1s is defined at X-ray continuum photon slope of Γ = 2.3. A comparison of the X-ray spectrum of NLS1s and BLS1s A large number of BLS1s have been observed by ASCA, allowing a detailed investigation of the X-ray absorption features and some statistical analysis of these properties (e.g. Reynolds 1997;George et al. 1998). This, however, is not the case for NLS1s. The objects studied so far from this group are few and the signal-to-noise of most of the ASCA spectra is far inferior to the high quality spectra of the brightest BLS1s. The information available in the literature, as well as the new data presented in this meeting, allow however, a superficial comparison of the X-ray spectral properties of the two groups. In particular, it was claimed that: • Some NLS1 galaxies show a strong absorption feature at around 1 keV which is different in shape and in energy from the commonly observed O VII and O VIII continuum absorption features in BLS1s. This was interpreted as due to O VII and O VIII resonance absorption lines in a gas moving at a relativistic speed away from the central object (Leighly et al 1997). • Other NLS1s (e.g. Akn 564, see Turner, Netzer & George 1999) show strong emission near 1 keV and no sign of X-ray absorption over the ASCA energy range. The strength and energy of this emission feature is still a source of discussion. Turner et al. 
(1999) have suggested that it may be produced by a large number of iron L-shell lines indicating, perhaps, iron over-abundance. • The 1 keV absorption feature observed in some NLS1s is the result of a large number of iron absorption lines close to this energy (Nicastro, Fiore & Matt 1999). The strength of this feature, and the relative weakness of the bound-free O VII and O VIII absorption, are related to the unusually steep continuum in NLS1s. The following is a closer examination of these claims. The underlying assumption is that photoionization by the X-ray continuum is the sole excitation and heating source of the HIG in both classes of AGN.

X-ray continuum absorption in NLS1 galaxies

The idea that a steeper X-ray continuum results in a different level of ionization of the surrounding gas can be tested by photoionization models. However, we must make sure that the comparison is meaningful and that the calculations indeed reflect the influence of the X-ray continuum. In particular, it is important to use the "correct" ionization parameter, i.e. to carefully choose the most appropriate energy range (E_1, E_2) in the general definition

U(E_1, E_2) = \frac{\int_{E_1}^{E_2} (L_E/E)\, dE}{4\pi r^2 c\, n_H}

where L_E is the monochromatic luminosity, r is the distance from the central source and n_H is the hydrogen number density. Several different ionization parameters are currently in use, e.g. the UV ionization parameter, designated here U(hydrogen) (E_1 = 13.6 eV), and the X-ray ionization parameter U_X (Netzer 1996) with E_1 = 0.1 keV and E_2 = 10 keV. The fractional ionizations of O VII and O VIII, the ions contributing most to the bound-free absorption by the HIG, are determined almost exclusively by E > 0.5 keV photons. Hence, it is useful to define a new ionization parameter, U(oxygen), over the energy range corresponding to oxygen ionization, E_1 = 0.538 keV and E_2 = 10 keV. Extensive tests show that HIG clouds, similar in their properties to those observed in BLS1s, are hardly affected by X-ray photons with energy below E_1. Hence, a meaningful comparison of the effect of the continuum slope is through comparing models with the same U(oxygen). It can also be shown that some combinations of different U(hydrogen) and spectral energy distribution (SED) can produce conflicting results regarding the influence of the X-ray continuum slope. Figure 1 shows a comparison of the spectra of two HIG clouds that are exposed to (a) a typical BLS1 continuum with Γ = 2 and (b) an extremely steep X-ray continuum with Γ = 2.8. U(oxygen) is the same in both cases (0.02). This is about the average value measured by George et al. (1998) for their sample of BLS1s. The softer part of this continuum is a combination of a power-law IR continuum and a weak UV bump. This corresponds, for the case of Γ = 2.8, to U(hydrogen) = 28 and U_X = 0.4. The column density is typical of strong-absorption HIG (10²² cm⁻²), the hydrogen number density, n_H, is 10⁸ cm⁻³ (the model is insensitive to this parameter provided it is below about 10¹³ cm⁻³) and the composition is close to solar. As evident from the diagram, "standard slope" and "steep slope" continua produce absorption features of roughly the same strength when normalized to the same U(oxygen). Thus, the X-ray continuum slope by itself is not the cause of the apparent difference in continuum absorption properties between NLS1s and BLS1s.

X-ray absorption lines in NLS1s

Next we test the idea that the mysterious 1 keV absorption feature is due to the conglomeration of a large number of iron L-shell lines combined with an unusually steep continuum. The absorption spectrum of such gas has been calculated allowing for various slope continua and various width lines (e.g.
various values of the micro-turbulent velocity). The results indicate that the optical depths and equivalent widths (EWs) of the absorption lines are very insensitive to the continuum slope, when normalized to the same value of U(oxygen). Both BLS1 continua and NLS1 continua produce strong absorption lines over the range of interest. The observed EWs depend on the line widths, the covering factor and the turbulent velocity. For more discussion see Netzer (2000). The examples shown in Fig. 2 and 3 are for a HIG illuminated by a Γ = 2.5 X-ray continuum with U(oxygen) = 0.2 (U(hydrogen) = 143 and U X = 2.5). Fig. 2 is a pure absorption case, i.e. the 4π covering factor is very small but the line-of-sight covering factor is 1.0. The softer part of the continuum, and the other model parameters, are identical to the ones used for the previous case. As seen in the diagram, strong absorption features are indeed present. This confirms the Nicastro et al. (1999) suggestion about the origin of the 1 keV absorption feature, especially for gas clouds with large micro-turbulent velocity (200 km/sec in the case shown here). Pure thermal profiles do not produce strong enough absorption lines to explain the 1 keV feature reported by Leighly et al. (1997). We note that the present calculations are rather different from the ones presented by Nicastro et al. (1999) for the same parameters. This is true for the H-like and He-like lines as well as the the iron L-shell lines (Netzer 2000). As for the influence of the X-ray slope, a comparison of various slope continua, with the same value of U(oxygen), clearly show that this behavior is typical of both NLS1s and BLS1s. Thus the strong absorption lines and large absorption EWs are typical of the two groups of sources. X-ray emission lines in NLS1s Finally, the idea of an unusually strong emission feature near 1 keV in NLS1 spectra, was also tested. Typical X-ray lines in photoionized gas are weak with small EWs (1-10 eV for the strongest lines assuming typical HIG, see Netzer 1996). This is well below the EW observed by Turner et al. (1999) in the ASCA spectrum of Akn 564 (about 70 eV). However, the inner geometry of the source may be such that the 4π covering factor is large yet the line-ofsight is relatively clear. In this case, continuum absorption is negligible and the scattered continuum photons will be seen on top of the unabsorbed powerlaw continuum. Such an unusual geometry can appear, for example, in flat HIG systems with extreme inclination to the line-of-sight. Fig. 3 shows the theoretical spectrum resulting from such a special geometry. The cloud is identical to the one shown in Fig. 2 but the 4π covering factor is large (0.8) and the line-of-sight is clear of absorbing material. The emission around 0.9-1.0 keV is, indeed, very strong. This is especially noticeable when plotted with the low ASCA resolution (solid line). Note again that the turbulent velocity is the key factor and clouds with pure thermal profiles produce much weaker emission lines. The ionized nuclear gas in Seyfert galaxies is subjected to the intense radiation field of the central source which can produce strong radiation pressure forces. Such forces, due to the UV continuum, and their influence on the ionized gas dynamics, have been studied in detail for BALQSOs (e.g. Arav, Lee & Begelman, 1994 and references therein). Yet, little has been done so far on the acceleration of the HIG by the intense X-ray source. 
In a recent paper, Chelouche and Netzer (2000) investigated the physics and dynamics of HIG clouds exposed to typical AGN X-ray continua. The results confirm that such gas can be accelerated to high velocities depending on the origin of the flow relative to the center, its column density, the confining pressure and the absorption line widths. Typical velocities of 500-1000 km/sec have been obtained for gas clouds that originate just outside the broad line region (BLR); a likely location of HIG clouds. The Chelouche and Netzer (2000) calculations have also been applied to steep spectrum sources, in an attempt to check whether the gas dynamics can provide another distinguishing factor between BLS1s and NLS1s. The detailed calculations, that will be presented elsewhere, show that the less luminous steeper X-ray continua of NLS1s are as efficient as the shallower and more luminous BLS1 continua in accelerating the HIG to high velocities. Thus, the dynamics of the HIG is not likely to provide a clear distinction between NLS1s and BLS1s.
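As a small numerical illustration of the normalization used throughout this comparison, the sketch below (NumPy/SciPy assumed) rescales power-law continua of different photon index Γ to the same U(oxygen). The distance, density and target value are arbitrary placeholders, and the photon-counting form of the ionization parameter is the assumption stated above.

```python
# Sketch: scaling power-law continua of different photon index Gamma so that
# their U(oxygen), evaluated over 0.538-10 keV, is identical. Numbers are
# illustrative only; the point is that the broader-band ionization parameters
# then differ while U(oxygen) is held fixed.
import numpy as np
from scipy.integrate import quad

C_CM_S = 2.998e10

def photon_rate(norm, gamma, e_lo, e_hi):
    """Relative photon output between e_lo and e_hi (keV) for N(E) = norm * E**-gamma."""
    rate, _ = quad(lambda e: norm * e**(-gamma), e_lo, e_hi)
    return rate

def u_param(norm, gamma, e_lo, e_hi, r_cm, n_h):
    """Ionization parameter Q / (4 pi r^2 c n_H) over the band [e_lo, e_hi]."""
    q = photon_rate(norm, gamma, e_lo, e_hi)
    return q / (4.0 * np.pi * r_cm**2 * C_CM_S * n_h)

r_cm, n_h = 3.0e17, 1.0e8      # distance (cm) and density (cm^-3), placeholders
target = 0.02                  # the U(oxygen) value used in the text

for gamma in (2.0, 2.8):
    # choose the normalization so that U(oxygen) over 0.538-10 keV equals target
    norm = target / u_param(1.0, gamma, 0.538, 10.0, r_cm, n_h)
    u_oxy = u_param(norm, gamma, 0.538, 10.0, r_cm, n_h)
    u_x = u_param(norm, gamma, 0.1, 10.0, r_cm, n_h)
    print(f"Gamma={gamma}: U(oxygen)={u_oxy:.3f}, U(0.1-10 keV)={u_x:.3f}")
```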
2014-10-01T00:00:00.000Z
2000-05-08T00:00:00.000
{ "year": 2000, "sha1": "1ffd8b433161c8327e7aefb30366082dc1049491", "oa_license": null, "oa_url": "http://arxiv.org/pdf/astro-ph/0005142v1.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1ffd8b433161c8327e7aefb30366082dc1049491", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
252567782
pes2o/s2orc
v3-fos-license
Deep learning based sferics recognition for AMT data processing in the dead band

In the audio magnetotellurics (AMT) sounding data processing, the absence of sferic signals in some time ranges typically results in a lack of energy in the AMT dead band, which may cause unreliable resistivity estimates. We propose a deep convolutional neural network (CNN) to automatically recognize sferic signals from redundantly recorded data over a long time range and use them to compensate the resistivity estimation. We train the CNN using field time-series data with different signal-to-noise ratios that were acquired from different regions in mainland China. To solve the potential overfitting problem due to the limited number of sferic labels, we propose a training strategy that randomly generates training samples (with random data augmentations) while optimizing the CNN model parameters, and we stop the training process and data generation when the training loss converges. In addition, we use a weighted binary cross-entropy loss function to solve the sample imbalance problem and better optimize the network, use multiple reasonable metrics to evaluate network performance, and carry out ablation experiments to optimally choose the model hyperparameters. Extensive field data applications show that our trained CNN can robustly recognize sferic signals from noisy time series for subsequent impedance estimation. The subsequent processing results show that our method can significantly improve the S/N and effectively solve the problem of the lack of energy in the dead band. Compared to the traditional processing method without sferic compensation, our method can generate smoother and more reasonable apparent resistivity-phase curves and a depolarized phase tensor, correct the estimation errors of the sudden drop of high-frequency apparent resistivity and the abnormal behavior of phase reversal, and finally better restore the real shallow subsurface resistivity structure.

I. INTRODUCTION

In AMT data processing, using data with a low S/N in the horizontal field channels will lead to biased transfer function estimates (Labson et al., 1985). Therefore, field source investigation is of great significance for AMT sounding (Egbert, 1986; Garcia & Jones, 2002), particularly at dead-band frequencies. Several studies have shown strong seasonal and diurnal variations in global lightning activity (Price, 1993; Chrissan & Fraser-Smith, 1996; Satori & Zieger, 1996; Füllekrug & Fraser-Smith, 1997; Watkins et al., 1998). The magnetic field signal level is usually lower than the coil noise threshold during the day. In contrast, the signal level at night is usually strong enough to robustly estimate the transfer function at AMT dead-band frequencies (Garcia & Jones, 2002). Based on this observation, Garcia & Jones (2005) proposed a new hybrid method of acquisition and processing, which solves the problem of the lack of energy in the AMT dead band at high latitudes. Over the past few decades, much research has been devoted to sferic signals. The arrival times of sferics carry information about the rate and dimensions of thunderstorms (Cherna et al., 1986), and the arrival time difference (ATD) technique can therefore be used to record lightning activity (Lee, 1986). Such approaches, however, have failed to remove propagation effects due to ground conductivity or ionospheric conductivity profiles. To alleviate the impact of this limitation, scholars have since proposed many new methods for precise positioning (Lee, 1989).
All these ATD techniques form the basis of the World Wide Lightning Location Network (WWLLN). For noisy AMT data, decomposition algorithms make use of statistical techniques such as the jackknife and the bootstrap to constrain distorted models (Chave & Smith 1994; McNeice & Jones 2001), but they do not take advantage of the high-S/N data. With the development of AMT field source theory, much research has tended to focus on extracting limited sections of high-S/N data for subsequent data processing (Garcia & Jones 2002) rather than on processing the poor-quality parts of the data (i.e., improving the S/N by precisely extracting sferics with amplitudes above the noise level observed in the AMT time series). By averaging the extracted data aligned on the same source time, the S/N can be made to grow in proportion to the square root of the number of stacked records (Macnae et al., 1984; McCracken et al., 1986). Based on this finding, Goldak & Goldak (2001) intended to use adaptive polarization stacking to improve the S/N, but failed to take into account the disturbing effect of averaging non-stationary sferic waveforms. It has also been found that the data between sferic events are irrelevant and can be discarded, and that if sferics are duplicated (or similar), stacking them will theoretically improve the S/N as well as reduce the deviation of the apparent resistivity and phase curves. Initially, scholars used statistical information such as the sample mean and variance to calculate the 90% confidence interval of the average value in order to determine the number of sferics needed to reach a robust mean (Leon-Garcia, 1994). However, short-distance lightning activity can produce individual transient events whose amplitudes are markedly larger than the low-level background field, so the S/N can be significantly improved by recording a transient source. In addition, several studies have revealed that processing a single sferic event based on polarization, distance and amplitude can also increase the S/N, especially in the AMT dead-band (Grandt, 1991; Ushio et al., 2015). Nevertheless, this method picks up sferics manually through station records, that is, by recording the time of lightning activity, then measuring its distance, and finally estimating the arrival time on the time series for recognition. This often makes the data processing more complicated and greatly increases the labor and time costs. In recent years, many fields have applied deep learning to achieve superior performance, and as long as the network is well trained, the forward prediction takes only a short time. Inspired by these technologies, and in contrast to the traditional method, this paper presents a new method for recognizing sferics that discards the complex network positioning steps and is no longer limited by station equipment, making AMT field exploration more convenient. Specifically, we use a deep CNN to recognize sferics, extract the signals with the closest waveforms and the highest amplitudes, and discard the data between sferic events to increase the S/N. To verify the performance of our method, we have carried out extensive tests on measured data with different S/Ns. The experimental results indicate that the well-trained CNN model achieves outstanding generalization and high computational efficiency. From the sferics recognized by our CNN model, we obtain smooth and reasonable apparent resistivity and phase curves in the AMT dead-band, thus reflecting the real subsurface resistivity structure.
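The square-root stacking argument above is easy to verify numerically. The sketch below stacks N aligned copies of a synthetic damped-sine "sferic" in independent Gaussian noise (all waveform and noise parameters are invented) and estimates the resulting S/N.

```python
# Sketch: stacking N aligned copies of a synthetic transient in independent
# Gaussian noise improves S/N roughly as sqrt(N). Signal and noise are invented.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5e-3, 512)                               # 5 ms window
sferic = np.exp(-t / 1e-3) * np.sin(2 * np.pi * 2000 * t)     # toy sferic

def snr_after_stacking(n, noise_std=1.0, trials=200):
    vals = []
    for _ in range(trials):
        noise = rng.normal(0.0, noise_std, (n, t.size))
        stacked = (sferic + noise).mean(axis=0)
        residual_noise = stacked - sferic       # noise left after averaging
        vals.append(np.abs(sferic).max() / residual_noise.std())
    return np.mean(vals)

for n in (1, 4, 16, 64):
    print(f"N={n:3d}  S/N ~ {snr_after_stacking(n):.2f}")
```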
The remainder of this paper is structured as follows. First, we introduce the network architecture, the dataset production process and the training parameter settings in Sect. 2. In Sect. 3, we describe the principle and implementation of a robust impedance estimation algorithm for subsequent impedance calculations on the recognized sferic signals. In Sect. 4, we present two field data applications to evaluate the proposed method, and use the apparent resistivity and the phase tensor to demonstrate the effectiveness and good generalization of our method. In Sect. 5, we discuss some possible problems and limitations. Finally, we present a conclusion in Sect. 6 to summarize our findings and discuss future directions.

II. SFERIC SIGNALS RECOGNITION

In this section, we aim to accurately recognize sferic signals from AMT time series. To this end, we conduct an ablation experiment to decide which deep neural network architecture is suitable for this task, and finally choose a one-dimensional VGG19 as the training network. To properly supervise the network learning process, a weighted BCE loss function is used to optimize the network parameters. In addition, various quality metrics are proposed to evaluate the predicted results and demonstrate the superior performance of the network. Furthermore, we propose a data generation method that randomly generates training samples with random data augmentation at the same time as training. By following this method, we effectively mitigate the potential overfitting problem caused by sample imbalance. Finally, we stop the process of training and data generation when the training loss converges.

A. Network Design

The network in this paper is a one-dimensional variant of VGG. To obtain more realistic results, we use a weighted BCE loss function to reduce the effects of sample imbalance. Moreover, we propose multiple types of quality metrics to evaluate the predictions. Sferics recognition is a supervised learning task that typically requires a large amount of labeled data to obtain a network with excellent performance. However, it is almost impossible to label an entire AMT time series point by point, so sferics recognition can instead be treated as a binary classification task. The architecture of our network is based on a one-dimensional VGG (Simonyan et al., 2014). Due to its concise and deep structure, VGG can control the number of parameters while extracting rich signal features, and it has been widely used in many classification tasks. In this network, the inputs are mapped to the output classification category: features are represented compactly in the deeper layers and are then fused through several fully connected layers; lastly, the classification is completed through a final fully connected layer.

Network architecture

The network architecture used in this paper is illustrated in Fig. 1 and consists of two parts: a feature extraction block and a classification block. The feature extraction block consists of five down-sampling blocks; each down-sampling block consists of four convolutional layers with kernel 1×3 and a max-pooling layer with kernel 1×2 and stride 2, and each convolutional layer is followed by a rectified linear unit (ReLU) (Nair & Hinton, 2010).
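A compact PyTorch sketch of one possible reading of this architecture is given below: five of the down-sampling blocks just described, a small fully connected classifier ending in a single output neuron, and the weighted binary cross-entropy loss introduced above (defined formally in the next subsection). PyTorch is assumed, and the channel counts, classifier widths and β value are illustrative rather than the exact configuration of Fig. 1.

```python
# Sketch (PyTorch assumed): a 1-D VGG-style sferics classifier. Four 1x3 convs +
# ReLU per block, 1x2 max pooling with stride 2, five blocks, then two fully
# connected feature-aggregation stages and one output neuron. Illustrative only.
import torch
import torch.nn as nn

def down_block(in_ch, out_ch):
    layers = []
    for i in range(4):                                   # four conv layers per block
        layers += [nn.Conv1d(in_ch if i == 0 else out_ch, out_ch,
                             kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool1d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

class SfericNet(nn.Module):
    def __init__(self, n_samples=240):
        super().__init__()
        chans = [1, 32, 64, 128, 256, 256]               # assumed channel widths
        self.features = nn.Sequential(*[down_block(chans[i], chans[i + 1])
                                        for i in range(5)])
        flat = chans[-1] * (n_samples // 2 ** 5)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 256), nn.BatchNorm1d(256), nn.ReLU(inplace=True),
            nn.Linear(256, 64), nn.BatchNorm1d(64), nn.ReLU(inplace=True),
            nn.Linear(64, 1))                            # single output neuron (logit)
        self.beta = 0.9                                  # assumed fraction of non-sferic samples

    def forward(self, x):
        return self.classifier(self.features(x)).squeeze(-1)

    def weighted_bce(self, logits, labels):
        p = torch.sigmoid(logits)
        loss = -(self.beta * labels * torch.log(p + 1e-8)
                 + (1.0 - self.beta) * (1.0 - labels) * torch.log(1.0 - p + 1e-8))
        return loss.mean()

# smoke test on a random batch of 8 windows of 240 samples
net = SfericNet()
x = torch.randn(8, 1, 240)
y = (torch.rand(8) > 0.9).float()
print(net.weighted_bce(net(x), y).item())
```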
The feature extraction block extracts abstract data representations progressively by continuously stacking such convolutional and pooling layers, so as to enlarge the receptive fields of the convolutional layers and improve the feature extraction performance of the network. The classification block consists of two feature aggregation blocks and a single neuron for the final output. Each feature aggregation block starts with a fully connected layer, followed by a batch normalization (BN) layer (Ioffe & Szegedy, 2015) and a ReLU. Our network enriches the learned feature maps by continuously deepening the channels, and it uses fully connected layers to systematically aggregate multi-channel information. The fully connected layer losslessly aggregates the feature information extracted by the convolutional layers, which helps to make use of global information for accurate prediction. The max-pooling layer is better at capturing changes along the time axis, producing larger local information differences while maintaining translation invariance; a small pooling kernel can capture more detailed information and better describe the sferic waveform features.

Loss functions

Sferics recognition is a binary classification problem, in which the class of each sample is determined from the corresponding fixed-length window of sampling points. In many binary classification problems, the binary cross-entropy (BCE) loss is the most common loss function used to measure the difference between the hypothesized class and the true class. First, in order to represent a probability, the network output needs to be mapped to the range between 0 and 1 through the sigmoid activation function. Subsequently, the BCE loss is computed from the value after the sigmoid activation. The sigmoid activation function is defined as

σ(x) = 1 / (1 + e^(−x))    (1)

Then, the BCE loss can be mathematically defined as

L_BCE = (1/N) Σ_{i=1}^{N} ℒ_i    (2)

where N is the batch size (the number of all samples in a batch), and ℒ_i is defined as

ℒ_i = −[ y_i log(p_i) + (1 − y_i) log(1 − p_i) ]    (3)

where p_i = σ(x_i), x_i represents the output of the network and is converted to a class probability by the sigmoid activation function σ(·), and y_i represents the ground-truth binary label, with a value of 0 or 1. However, sferics recognition is a sample-imbalanced task, and this problem will force the network to be biased towards learning more non-sferic (negative-sample) features, which will eventually lead to more true positive samples being predicted as negative samples. For the subsequent impedance estimation, it would then not be possible to extract enough sferics to obtain robust results in the dead-band. In order to solve the sample imbalance problem, we update the loss function to the weighted BCE loss. Mathematically, the weighted BCE is defined as

L_wBCE = −(1/N) Σ_{i=1}^{N} [ β y_i log(p_i) + (1 − β)(1 − y_i) log(1 − p_i) ]    (4)

where β represents the ratio between the number of non-sferic samples and the total number of samples, and 1 − β represents the ratio of sferic samples.

Quality metrics

When training the network for sferics recognition, we use a series of metrics, including Accuracy, Precision, Recall and the F1 score, to quantitatively evaluate the predicted results from multiple perspectives. The mathematical meaning of each symbol is shown in Table I:

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (5)
Precision P = TP / (TP + FP)    (6)
Recall R = TP / (TP + FN)    (7)
F1 = 2PR / (P + R)    (8)

where Accuracy represents the ratio between correctly classified samples and the total samples, Precision represents the ratio of true positive samples among the predicted positive samples, Recall represents the proportion of the true positive samples that are correctly predicted, and F1 represents a "balance point", a weighted harmonic mean of P and R that puts more focus on the lower of the two.
B. Training Data Sets

Before training a model for sferics recognition, we need a large amount of labeled AMT time-series data. Considering the influence of different regions on the distribution of the field source, we use data from three surveys (i.e., Tibet (high S/N), Nanjing (medium S/N) and Wuhan (low S/N)) to make the training set. We follow the principle provided by Hennessy et al. (2018) and select windows whose amplitude is significantly higher than the background noise level and whose wave shape is closest to a large-amplitude sferic as positive samples. In addition, we implement a data augmentation scheme to further enlarge the training data set and improve the model generalization.

Data generation

To properly train the network, we develop a workflow that randomly generates training samples, with random data augmentation, while the network parameters are being optimized.

C. Training Details

As described in the previous section, we generated 30,000 positive samples for training. To avoid uncertainties associated with sferic waveform changes between different surveys and acquisition times, each input sample is mapped to a standard normal distribution with mean 0 and variance 1 by the following formula

x* = (x − μ) / σ    (9)

where x* is the normalized input sample, and μ and σ are the mean and standard deviation of each channel within the input sample, respectively. We then preprocess all the input samples with the data augmentation discussed before. To avoid losing sferics as much as possible, we set the sample length to 240 points and the sample mask radius to 36 points. We train our model with the Adam optimizer (Kingma, 2014) and set β₁ = 0.9, β₂ = 0.999, ε = 10⁻⁸. The learning rate is initialized to 0.001, and we use a linear learning rate schedule to gradually decrease the learning rate and slow down the parameter updates. When the validation metric stagnates for 30 epochs, the learning rate is reduced by a factor of 0.5 (Fig. 5c). We can see that the loss curves for both training and validation gradually converge to less than 0.01 and 0.08, and the accuracy curves gradually stabilize at 97% and 95% after 40 epochs, when the optimization stops. From these data, we can infer that the network has acquired the ability to accurately distinguish between positive and negative samples. To demonstrate the performance of the trained network, we first randomly select the time-series data of a station in the test set as the input of the network to verify the classification results; the test results are shown in Fig. 6.

III. RECOGNIZED SFERIC SIGNALS FOR IMPEDANCE ESTIMATION

Before calculating the impedance, preprocessing of the time-series data is usually required. To avoid sferics from different storm systems, with different waveforms, contaminating the data within a predicted sferic window, we perform waveform correlation filtering for each sferic ensemble, and windows with correlations less than a threshold (0.7) are discarded (Hennessy, 2018). Moreover, each sferic is time-shifted to align with the ensemble average waveform to correct timing errors. Finally, in order to compare with traditional impedance estimation methods, we use a robust M-estimator to calculate the apparent resistivity and phase of the AMT field data preprocessed by our proposed method. The following subsections briefly describe the basic principles and procedures of AMT impedance estimation.

A. Transfer Functions

The AMT data analysis starts with the estimation of transfer functions between the electric field and the magnetic field (i.e., the impedance tensor) (Sims et al., 1971; Vozoff, 1972).
A common practice is to perform a Fourier transform on the AMT field-channel time series (e_x(t), e_y(t), h_x(t), h_y(t)) to obtain the frequency-domain data (E_x(f), E_y(f), H_x(f), H_y(f)). When the electromagnetic field is a plane wave (zero-wavenumber model), the electromagnetic field components satisfy the following dual-input, dual-output linear system:

E_x = Z_xx H_x + Z_xy H_y
E_y = Z_yx H_x + Z_yy H_y    (10)

When there is noise, it can be written in the following vector form:

E = Z H + ε    (11)

where E and H denote the horizontal components of the electric and magnetic fields, respectively; Z denotes the impedance tensor, which can be regarded as a transfer function to be estimated, in [mV/(km × nT)]; and ε denotes the errors. In the following, Φ denotes an estimation operator, for example

Z ≃ Ẑ = Φ(E, H)    (12)

and r denotes the residual, calculated from

r = E − Ẑ H    (13)

The definition of the operator Φ is not unique in the presence of noisy data. The next section describes the definition and computation of the operator used in the M-estimator.

B. M-estimator

To minimize the effect of data with large residuals in the regression, Egbert & Booker (1986) introduced a robust technique, called the M-estimator, which is considered one of the most efficient transfer function estimation algorithms (Chave & Thomson, 1989). As long as one remote station is not polluted by correlated noise, it can produce unbiased AMT estimates. The basic idea of the M-estimator is to achieve robust estimation by controlling the influence function, so as to reduce the influence of extreme outliers on the estimation results. Compared with standard least-squares estimation, the M-estimator automatically weights the observed data to reduce the influence of outliers on the impedance estimation. Mathematically, the impedance Ẑ is defined by the following nonlinear weighted least-squares relation:

Ẑ = (H^H W(r̂) H)⁻¹ H^H W(r̂) E    (14)

where the superscript H denotes the conjugate transpose and W(r̂) is a weighted diagonal matrix that depends on the residuals, given by

W_ii = w(r̂_i / σ)    (15)

where σ is the scale value, which determines the size of residual that needs to be down-weighted, and w is a weight function whose purpose is to weaken the influence of large residuals. In practice, the impedance Ẑ can be estimated iteratively by

Ẑ^(k+1) = (H^H W(r̂^(k)) H)⁻¹ H^H W(r̂^(k)) E    (16)

The iteration converges when the weighted residual sum of squares changes by less than a custom tolerance (e.g., 1%). However, the iteration does not necessarily converge in general, although there are weighting functions that make Ẑ converge robustly, independently of the initial value. The M-estimator method has two key parameters: the scale value σ and the weighting function w. Firstly, the scale value must be estimated stably; Chave et al. (1987) discussed its estimation as

σ̂ = d_MAD / β_MAD    (17)

where d_MAD and β_MAD are the sample and theoretical values of the median absolute deviation (MAD). When the residuals follow a chi-square distribution, β_MAD = 0.44845; when they follow a normal distribution, β_MAD = 0.6745. There are different weighting functions to choose from for the M-estimator; the common ones are the Huber weighting function and the Thomson weighting function (Holland & Welsch, 1977). The choice of weight function is a trade-off between robustness and stability: compared with the Huber function, the Thomson function is more robust but does not guarantee stability. Therefore, we here use the M-estimate obtained with the Huber weighting function as a good initial value for the Thomson weighting function to overcome the instability.
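A compact numerical sketch of this iteratively reweighted least-squares scheme with Huber weights is given below (NumPy assumed). It estimates one row of the impedance tensor at a single frequency from synthetic Fourier coefficients; the Huber threshold, scale factor and test data are illustrative and this is not the authors' implementation.

```python
# Sketch: iteratively reweighted least squares with Huber weights for one row of
# the impedance tensor at one frequency. E holds the Ex (or Ey) Fourier
# coefficients over many windows; H is the (n_windows x 2) matrix of (Hx, Hy).
import numpy as np

def huber_weights(r, scale, k=1.5):
    """Unit weight for small residuals, k*scale/|r| beyond the Huber threshold."""
    a = np.abs(r) / scale
    return np.where(a <= k, 1.0, k / a)

def m_estimate(E, H, n_iter=20, tol=0.01):
    Z, *_ = np.linalg.lstsq(H, E, rcond=None)        # ordinary LS starting point
    prev = np.inf
    for _ in range(n_iter):
        r = E - H @ Z
        scale = np.median(np.abs(r)) / 0.44845       # MAD-based scale estimate
        w = huber_weights(r, scale)
        Hw = H * w[:, None]                          # W H (W is diagonal)
        Z = np.linalg.solve(Hw.conj().T @ H, Hw.conj().T @ E)
        wrss = np.sum(w * np.abs(r) ** 2)            # weighted residual sum of squares
        if abs(prev - wrss) < tol * prev:
            break
        prev = wrss
    return Z

# synthetic test: Z_true = (Zxx, Zxy), Gaussian noise plus a few strong outliers
rng = np.random.default_rng(1)
H = rng.normal(size=(500, 2)) + 1j * rng.normal(size=(500, 2))
Z_true = np.array([2.0 - 1.0j, 10.0 + 5.0j])
E = H @ Z_true + 0.3 * (rng.normal(size=500) + 1j * rng.normal(size=500))
E[:10] += 50.0
print(np.round(m_estimate(E, H), 2))
```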
In summary, the basic principle of the impedance M-estimate is to use iterative weighted least squares to estimate the regression coefficient, then determine the weight through the residual and scale of the previous step, and iterate repeatedly to improve the weight coefficient until the variation of the residuals is lower than a tolerance. C. Discrete Fourier Transform Computation The field channel time-series data of duration ( ) can be expressed as the following relationship = where, denotes the equally sized and evenly spaced time windows split from the time series; denotes the target frequency, denotes the number period, then the duration of one window can be expressed as ; denotes the window overlap ratio, and this shifting can increase or conversely decrease the correlation between windows. Subsequently, the number of windows can be determined by = When time series are divided into time portions, we can use the Slepian data taper windows with time bandwidth ( =1,2,3 or 4) to calculate DFTs. In order to analyze the processing effect of proposed method conveniently and intuitively. For stations, we mainly compare the improvement of power spectrum, apparent resistivity and phase curves in AMT dead-band. The property of AMT compels that the observed response function on the Earth's surface must vary smoothly with frequency, and we cite this increase in smoothness as strong evidence of the superiority of the data processing method (Garcia, 2005;Booker, 2014). The phase tensor ellipse graphically reflects how the phase relationship varies with polarization, with the major axis of the tensor represented by the major and minor axes of the ellipse (Caldwell, 2004). For survey, the evaluation indicators selected are mainly the phase tensor pseudosection and the apparent resistivity and phase pseudo-section in the dead-band. IV. FIELD DATA APPLICATIONS Phase tensor pseudo-sections provide information about the directionality of the regional resistivity structure. Apparent resistivity and phase pseudo-section can more intuitively reflect the subsurface resistivity distribution. The above-mentioned indicators are the main parameters of distortion (Hennessy, 2017). A. Case Study One--Nanjing The first field dataset was acquired from Nanjing city, Jiangsu province, Table III that each metric also achieves great results on the test set that our model has never seen before, which strongly demonstrates the excellent generalization of our model. Fig. 8(a)-(b) show the power spectrum comparison of station 001. As can be seen from Fig. 8(a) that there are Industrial current interferences (fundamental frequency is 50hz) in the original time series, which appear as fixed frequency peaks on the power spectrum. Fig. 8(b) verifies that our approach is able to suppress strong noise to restore real data spectrum, and to better highlight the main frequencies (1khz, 1.8khz, 2khz, 2.5khz, 3.9khz and 4.5khz) of sferic signals to compensate for the lack of natural field energy in the dead-band (1.5 khz-5khz) Fig. 9(a) shows that the apparent resistivity and decays abnormally with a degree of two orders of magnitude between multiple frequency points at dead-band frequencies, which obviously violates the objective law that the underground resistivity structure must vary slowly. And the abnormal deviation of the phase curve and the confidence interval of the error bar indicate a serious impedance estimation error in AMT dead-band. Fig. 9(b) shows the apparent resistivity and phase curves calculated using our method. 
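For reference, apparent resistivity and phase are obtained from the estimated impedance elements with the standard conversion sketched below. The sketch assumes the impedance is expressed in the practical field units [mV/(km × nT)] used above, and the example frequency in the comment is only illustrative.

    import numpy as np

    def apparent_resistivity_phase(Z, freq_hz):
        """Apparent resistivity (ohm-m) and phase (degrees) from one impedance element.

        Assumes Z is in practical field units of mV/(km*nT), for which the usual
        conversion is rho_a = 0.2 * |Z|^2 / f.
        """
        rho_a = 0.2 * np.abs(Z) ** 2 / freq_hz
        phase_deg = np.degrees(np.arctan2(np.imag(Z), np.real(Z)))
        return rho_a, phase_deg

    # Hypothetical use at a dead-band frequency (values illustrative only):
    # rho_xy, phi_xy = apparent_resistivity_phase(Z_xy, freq_hz=2500.0)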
The apparent resistivity and recovers to its normal value, and three curves are smoother and have smaller standard errors, restoring better consistency between adjacent stations. Correspondingly, the phase tensor ellipse also shows great depolarization properties, with its major axis recovering the lateral variation of the real subsurface resistivity structure. All the findings above verify the effectiveness of our method. For this survey, Fig. 10(a) shows that there is severe electric field distortion on the original phase tensor pseudo-section, which usually changes the polarization direction of the local electric field, thus causing the observed response is distorted by the local conductivity inhomogeneity (Caldwell, 2004). By computing the phase tensor for all 26 combinations of synchronous stations, as shown in Fig. 10(b), the pseudo-section preprocessed by our method exhibits smoother frequency variations at dead-band frequencies, while largely eliminating the distortion of the observed impedance tensor, and recovering the graphical representation of the tensors involved in the galvanic distortion of a 2D impedance tensor. Fig. 11(a) shows that a common problem in this survey, that is, the lack of AMT field energy leads to a misestimation of impedance in the dead-band. Specifically, both apparent resistivity and present a low resistance between 10 −4 (s) and 10 −3 (s), which is shown as a dark brown long axis along the transect, and the occurrence of rare anomalous phase behavior (phase reversal) at 56, 61, 66 stations. Comparing Fig. 11(a) highlights the superiority of our proposed method between 700 Hz and 10 kHz. The apparent resistivityphase pseudo-sections in Fig. 11(b) tends to be smooth, and the anomalous behavior of low resistance in the dead-band and the estimation error of the phase reversal are corrected, which further restores the real subsurface resistivity structure. Although the structural patterns and waveform features are different from the training data. But overall, the results above indicate that our method works well on this field data example and show a significant improvement in AMT dead-band. working at the same time, which make the time-series waveform more complex. In fact, the second real data example is the most challenging one, and robustly obtaining accurate apparent resistivity and phase of this field data remains a challenge for existing methods. Fig.12 shows that field data acquired at station41 present more energetic background noise interference in the time series, the existence of these disturbances brings great challenges to data interpretation in the dead-band. What is surprising is that our well-trained model is able to accurately recognize sferic signals even in the face of such complex time-series waveforms. This finding further supports the idea of the strong expressiveness of our CNN model and verifies the robustness of model against data noise. Table IV shows the classification quality metrics on the test set of Wuhan survey. Affected by low S/N data, the average accuracy of model classification in Wuhan survey is nearly 2% lower than that of Nanjing survey. Fig. 13(a)-(b) show the power spectrum comparison of station41. Fig. 13 (a) shows that the station is severely disturbed by cultural interference, with the noise signal predominant from 1khz to 6khz, particularly. Fig.13(b) shows that after processing by our method, the real frequency of the nature field can be basically recovered from the power-line/industrial interference. 
However, due to the few sferic events acquired and the existence of strong interference sources (i.e., power lines), we can still see that there is a lack of field source energy and a few residual harmonic disturbances in the dead-band. The difference between the minimum and maximum of the curve is close to 4 orders of magnitude, error bars and phase tensor ellipses also indicate a breakdown of robust impedance estimation methods caused by strong noise interference, especially in station46. Fig.14(b) shows the results after processing by our method. It can be seen that the above problems have been properly resolved. The apparent resistivity curves eliminate spurious electrical structural changes and restores smoothness, and the phase curve corrects the abnormal offset to restores good consistency, although the error bar and phase tensor ellipse still slightly affected by interference. Fig. 15(a) shows that the original data is severely disturbed by polarization along the transect, especially in station26-34, station41, station43, and station46. Fig.15(b) shows that the phase tensor pseudo-section is partially recovered from the distortion of the phase tensor ellipse caused by strong noise interference, but there is still room for further improvement. From Fig. 16(a), we can see the obvious anomalies of apparent resistivity mutation and phase reversal, which is specifically manifested in the above-mentioned stations. Fig. 16(b) shows that our method is still effective in the face of low S/N data, and the processed pseudo-section is smoother and more reasonable, which is consistent with real subsurface resistivity structure. Finally, the anomalies present in Fig.16(a) Therefore, if there is a large interference in the acquisition time or within the survey area, we recommend using a variety of advanced exploration methods and anti-interference algorithms for preprocessing to obtain a high S/N time series. Theoretically, the results achieved using hybrid methods will be better than using our method only. In view of the fact that our model has achieved a high sferics recognition accuracy on the existing dataset, in the follow-up work, we plan to use semi-supervised learning methods to automatically label data, and acquire generally representative data to enlarge the training set. In this way, we can further reduce the time cost of manual annotation and achieve highprecision recognition of signals under complex conditions. The training parameters used in this paper are not optimal, and we strongly recommend setting your own parameters tuning strategy according to the actual situation. VI. CONCLUSION The main goal of the current study is to obtain superior impedance estimation in the AMT dead-band. In this paper, we focus on the recognition and extraction of sferics, and provides a simple and effective method. Specifically, we propose a novel CNN-based method for sferic signals recognition. Without any manually interception of time series, the well-trained network can automatically recognize and extract sferic segments from original time series. The proposed network is a one-dimensional variant of VGG, and we employ random sampling windows to generate samples, add random noise for data augmentation, and finally use weighted BCE as the loss function to optimize network parameters during training. 
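As a small illustration of the two training ingredients just mentioned, the sketch below shows one possible form of the weighted binary cross-entropy loss and of additive random-noise augmentation; the positive-class weight and noise level are illustrative assumptions, not the values used in the paper.

    import numpy as np

    def weighted_bce(y_true, y_pred, pos_weight=2.0, eps=1e-7):
        """Weighted binary cross-entropy; pos_weight > 1 penalizes missed sferics more.

        y_true : array of 0/1 labels; y_pred : predicted probabilities in (0, 1).
        """
        y_pred = np.clip(y_pred, eps, 1.0 - eps)
        loss = -(pos_weight * y_true * np.log(y_pred)
                 + (1.0 - y_true) * np.log(1.0 - y_pred))
        return loss.mean()

    def augment_with_noise(windows, rel_std=0.05, rng=None):
        """Add zero-mean Gaussian noise scaled to each window's amplitude (simple augmentation)."""
        rng = np.random.default_rng() if rng is None else rng
        scale = rel_std * np.std(windows, axis=-1, keepdims=True)
        return windows + rng.normal(0.0, 1.0, windows.shape) * scale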
The application to two field data examples verifies that the trained CNN model not only performs well on data not included in training, but also achieves excellent robustness and generalization on data with different S/N levels. The examples demonstrate that the proposed method effectively addresses the lack of natural-field energy in the AMT dead-band, eliminates the distortion displayed by the phase tensor pseudo-section, and corrects the anomalous behavior present in the apparent resistivity-phase pseudo-sections. Using the proposed method, we obtained smoother apparent resistivity and phase curves at dead-band frequencies (1.5-5 kHz), which better restore the real subsurface resistivity structure. These results have a number of important implications for future practice, from mineral resource exploration to geothermal energy production. However, the generalizability of the results is subject to certain limitations. The main limitation of the proposed method is the S/N of the time series data: when the data S/N is low, the sferic segments extracted by our method still contain strong noise, so the method reduces the number of windows available for stacking in the impedance estimation without improving the data S/N. Further research should explore how to properly combine advanced recognition and denoising algorithms to achieve high-precision extraction of the main field-source signals. Furthermore, we intend to extend our data generation workflow by increasing the diversity of sferic waveforms, so as to obtain more realistic waveform characteristics and data distributions. By doing so, we expect to improve generalization on more field data examples and to refine the overall workflow.

Data Availability Statement
The field data used in this study can be obtained by direct request to the corresponding author, Rujun Chen.
2022-09-29T06:42:27.489Z
2022-09-22T00:00:00.000
{ "year": 2022, "sha1": "41ed08c6588844bfb06e2b9913c39d5104c28ba0", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "41ed08c6588844bfb06e2b9913c39d5104c28ba0", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
20343846
pes2o/s2orc
v3-fos-license
Chromatin Structure EVIDENCE THAT THE 30-nm FIBER IS A HELICAL COIL WITH 12 NUCLEOSOMES/TURN* Sedimentation analysis has been used to compare the structure of 30-nm chromatin fibers, isolated and digested under conditions that maintain the native struc- ture, with relaxed-refolded chromatin. The native chromatin fibers show sharp, ionic strength-dependent changes in sedimentation coefficient that are not ap- parent in relaxed-refolded fibers. The first transition at approximately 20 mM ionic strength reflects the organization of the 10-nm polynucleosome chain into a loose helically coiled 30-nm fiber. Between 20 and 60 mM ionic strength there is considerable interaction between nucleosomes within the coils to generate a stable helical array with 12 nucleosomes/turn. Above 60 mM ionic strength the helical coil continues to con-dense until it precipitates at ionic strengths slightly greater than those considered physiological, indicating that there is no end point in fiber formation. The data is incompatible with a solenoid model with 6 nucleo-somes/turn and also rules out the existence of a beaded subunit structure. structure original Finch Sedimentation analysis has been used to compare the structure of 30-nm chromatin fibers, isolated and digested under conditions that maintain the native structure, with relaxed-refolded chromatin. The native chromatin fibers show sharp, ionic strength-dependent changes in sedimentation coefficient that are not apparent in relaxed-refolded fibers. The first transition at approximately 20 m M ionic strength reflects the organization of the 10-nm polynucleosome chain into a loose helically coiled 30-nm fiber. Between 20 and 60 mM ionic strength there is considerable interaction between nucleosomes within the coils to generate a stable helical array with 12 nucleosomes/turn. Above 60 mM ionic strength the helical coil continues to condense until it precipitates at ionic strengths slightly greater than those considered physiological, indicating that there is no end point in fiber formation. The data is incompatible with a solenoid model with 6 nucleosomes/turn and also rules out the existence of a beaded subunit structure. Three types of models have been proposed for the structure of the 30-nm chromatin fiber, the original solenoid model of Finch and Klug (l), the superbead or nucleomer model (2,3) and, more recently, a number of models that are all based upon a helical coil arrangement of the nucleosomes (4-10). Although many different sources of chromatin have been used it is apparent that the above models have been derived from distinctive methods of chromatin preparation. Thus, the solenoid model was proposed from work carried out on refolded chromatin, that is the native fiber was allowed to completely relax at low ionic strength and then refolded in the presence of cations. Superbeads, on the other hand, are only seen in sucrose gradients or in the electron microscope when chromatin is prepared and digested at certain intermediate ionic strengths, typically 40-60 mM monovalent cation, whereas the various helical coil models have evolved from work carried out predominantly on either native chromatin or fibers that were never exposed to extremely low ionic strengths. It is conceivable, therefore, that the different models are a reflection of artifactual changes introduced into the fiber structure. Alternatively, data obtained from partially unfolded, or refolded fibers may not be representative of the fiber in vivo. 
Quite clearly, to be acceptable a model derived from studies carried out in one set of conditions must be able to explain how the fiber behaves under all conditions. * This is National Research Council Publication No. 27793. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. Sedimentation analysis is one of the few solution techniques that permits chromatin to be studied over a wide range of ionic strengths and has been used to provide data that is considered to support each of the above models (2-5, [8][9][10][11]. In this paper we have used this technique to study the changes of both native and relaxed-refolded liver chromatin over the entire range of ionic strengths (10-170 mM monovalent cation) that effects the 10 c, 30-nm fiber transition. The data is not consistent with either the solenoid or the superbead models for the structure of the 30-nm fiber, but is compatible with a helical coil arrangement containing 12 nucleosomes/ turn. MATERIALS AND METHODS Nuclei were isolated from rat liver in 0.25 M sucrose, 50 mM Tris-HCl (pH 7.51, 150 mM KCl, 5 mM MgClZ, and 0.2 mM phenylmethylsulfonyl fluoride as described previously (12,13). The nuclei were resuspended at 0.8-1.0 mg of DNA/ml in 5 mM Tris-HC1 (pH 8.2 at 30 "C) containing 0.2 mM phenylmethylsulfonyl fluoride together with the concentrations of monovalent cations described in the text. Micrococcal nuclease (Sigma or Pharmacia Biotechnology Inc.) at a concentration of 50 units/ml (13) was added and the nuclei digested at 30 "C for 5-15 min. The reaction was terminated by the addition of EDTA to 1 mM followed by rapid cooling in ice water. The suspension was then centrifuged at 25,000 X g for 15 min at 5 "C to generate a supernatant containing 30-40% of chromatin as soluble oligonucleosome fragments. Sedimentation coefficients of the oligonucleosomes were determined by layering 0.5 ml of the supernatant (1-5 AZM) units) onto a 8-35% (w/w) sucrose gradient (prepared in 5 mM Tris-HC1, pH 8.1, at 5 "C, 1 mM EDTA and the KC1 concentration indicated in the legends). The gradients were centrifuged at 5 "C in a SW40 rotor at 40,000 rpm to preset WZt values of either 2 X 10" or 5 X 10" radiansz/ s (195 and 480 min, respectively) in a Beckman L8-70 ultracentrifuge. The gradients were fractionated into 0.7-ml fractions (13) for refractive index determination and DNA size analysis. DNA was extracted from the gradient fractions and electrophoresed as described previously (13). Each set of gradient fractions on the gels was flanked by a series of DNA gel markers (1-kilobase and 123-base pair ladders and a Hind111 digest of X DNA, Bethesda Research Laboratories). Gels were visualized and photographed immediately after electrophoresis (13) and 8 X 10-inch negatives were produced and scanned on a Beckman DU-8 spectrophotometer. Because the relationship between log (DNA size) and distance migrated is seldom linear over large size ranges, the spectrophotometer was used solely to accurately determine the position of standard and unknown DNA bands. A standard curve was then drawn manually and the DNA size of unknown fragments was determined from this curve. 
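One way to reproduce the standard-curve step numerically is to interpolate log(DNA size) against migration distance, as in the sketch below; the marker positions in the commented example are hypothetical placeholders, not measured values.

    import numpy as np

    def dna_size_from_migration(distance_mm, marker_distances_mm, marker_sizes_bp):
        """Interpolate an unknown fragment's size from a gel standard curve.

        Interpolation is piecewise linear in log10(size) versus migration distance,
        mirroring the manually drawn standard curve described in the text. Markers
        must be ordered by increasing migration distance (decreasing size).
        """
        log_size = np.interp(distance_mm, marker_distances_mm, np.log10(marker_sizes_bp))
        return 10.0 ** log_size

    # Hypothetical marker positions (mm) and sizes (bp), for illustration only:
    # marker_d = [12.0, 18.5, 24.0, 31.0, 38.5]
    # marker_bp = [5090, 2036, 1018, 506, 123]
    # size_bp = dna_size_from_migration(27.5, marker_d, marker_bp)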
Calculation of the Data and Experimental Rationuk-Sedimentation Coefficients ( s~~,~) were determined for each gradient fraction, assuming a particle density of 1.5, using a computer program modified from that described by Young (14) to evaluate the expression: where 7 = viscosity, p = density of particle ( p ) , sucrose (m), and 12223 water (w) a t 5 and 20 "C, and r = radial distance. The left hand integral was either 2 X 10" or 5 X 10" for short and long runs, respectively, and the right hand integral was evaluated by integrating the density and viscosity values for each successive fraction. The mass of chromatin in each fraction was evaluated in one of two ways. Oligonucleosomes in the size range 10-80 nucleosomes were resolved on the gradients run to a w't value of 2 X 10". The weight average number of nucleosomes (A) was obtained from the mean size of DNA in each fraction taking 200 base pairs as the DNA content of a liver nucleosome (11). Smaller oligomers (1-12 nucleosomes) were resolved directly on gradients run to a w't value of 5 X 10". The sedimentation coefficient of a chromatin oligomer is related to its mass by the Svedberg equation, smg.ur = (ppp.,)d'/lSq,), which defines the sedimentation coefficient in terms of the physical characteristics of the macromolecule. Thus the rate of sedimentation of a particle is proportional to the square of its diameter, d (15), and for a spherical particle with volume (V) = 47rr3/3, sedimentation is therefore proportional to and, since mass (M = V . p p then sm.u,a M":'. For nonspherical macromolecular polymers this formula is generalized to sn M" with the value of a being a function of the shape of the macromolecular complex (16,17). For example, a compact cluster of oligomers approaches the theoretical maximum value of a = 0.667, whereas a more open, flexible coil or chain of oligomers, such as polyribosomes, has a value of 0.5-0.55. A random coil on the other hand has a value of a = 0.2-0.3 (16). Since a can be evaluated from double logarithmic plots of sm., versus M this approach can be used to gain insight into the changes in structural organization that occur as the 10-nm chain folds up into, or is generated from, the 30-nm higher order fiber. Gradient Profiles of Chromatin Fragments-The amount of chromatin released from nuclei by micrococcal nuclease in a given time period is dependent upon the concentration of monovalent cation (13,18) and in these experiments the digestion time was adjusted so that a minimum of 30-40'35 of the chromatin was solubilized. Typical optical density profiles for both long ( J t = 5 x 10" radians2/s) and short (u2t = 2 X 10" radians'/s) gradient runs are shown in Fig. 1. The gradients were run a t either 10 or 100 mM ionic strength. In the long runs the actual distance migrated was determined directly from the position of peaks for oligomers up to 10-12 nucleosomes (Fig. Ut). Chromatin fragments containing more than 13-14 nucleosomes were pelleted under these conditions. Furthermore, these gradients easily detected the salt-induced compaction of the oligomers. The shorter runs (2 x 10") permitted the determination of sm,". values for fragments containing up to 80 nucleosomes (Fig. 1B). Because of difficulties in determining accurate mass average values in fractions containing less than 10-12 nucleosomes these gradients were only used to determine sm.., values for oligomers in the range 12-80 nucleosomes. 
Thus between the two runs we were able to accurately determine s20.u, values for particles containing from 1 to 80 nucleosomes. Although fragments spanning this entire size range were present in digests at low (10 mM) The number ( I ) refers to the position of mononucleosomes and sedimentation is from left to right. and high (100 mM) ionic strengths there was a marked difference in the size distribution (Fig. 1B). At low ionic strength the peak was a t approximately 12-14 nucleosomes, whereas a t high ionic strength it was 25-30 nucleosomes suggesting that the peak is a function of ionic strength and is not determined solely by the number of nucleosomes in a superbead (19). Size analysis of the DNA in fractions from long and short runs are shown in Fig. 2. The data in Fig. 2.4 confirmed that each successive peak in the APm profile corresponded to an increment of 1 in oligomer size. For the short runs (Fig. 2B) each fraction contained a discrete subset of oligomer sizes for which a mass average could be determined by densitometry. In addition, this gel also shows that the fast-sedimenting material consisted of long nucleosome oligomers rather than aggregates of smaller particles. Sedimentation Analysis of Long Oligomers-The ionic strength-dependent changes in chromatin fiber structure were studied on both native chromatin in the process of unfolding (Fig. 3A) and chromatin that had been previously relaxed by exposure to low ionic strength buffers (Fig. 3B). In all these experiments the ionic strength in the gradient tube was identical to that of the digestion buffer. The data is presented as double logarithmic plots of s20.e versus ri. For native chromatin there was, a t each ionic strength, a linear relationship between s~.~, and the mass of the oligomers satisfying the equation S~.~C Y Ma. Exposure to lower ionic strengths displaced the line downwards which indicated that the 30-nm fiber was becoming less compact. In addition, there were pronounced changes in the slope of these lines (i.e. the value of a in the above equation) particularly a t higher ionic strengths. At all ionic strengths greater than 20 mM the lines converged and intersected a t a point corresponding to 11-12 nucleosomes. The sedimentation behavior observed above is independent a t 506, 1,018, 1,635, 2,036, 3,054, 4,072, and 5,090 base pairs and the X digest has bands at 560,2,000,2,300,4,400, 6,700,9,400, and 23,100 base pairs. taining from 10 to 60 nucleosomes showing a linear increase in szo,w with ionic strength. This latter data is similar to that described by Butler and Thomas (11) for refolded liver chroi-8 matin. of the extent of digestion at any given ionic strength. Thus although the size distribution of the fragments decreases (see Fig. 2 of Ref. 20) as digestion proceeds, the sedimentation properties of oligomers of a given size remain the same (data not shown). Since as much as 8045% of chromatin can be released during these prolonged digestions the data reflects the sedimentation behavior of bulk chromatin rather than that of specific subsets released differentially at the various ionic strengths. When similar experiments were performed on chromatin that was relaxed by exposure to an ionic strength of 5 mM prior to digestion and sedimentation analysis the results were quite different (Fig. 3B). The lines were displaced upwards as the fibers became more compact with increasing ionic strength, but the slope of the lines showed only modest increases and no point of convergence was evident. 
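The analysis just described amounts to fitting log(s20,w) = log k + a log(n) at each ionic strength and locating where the fitted lines intersect (the point reported above at 11-12 nucleosomes for native chromatin); a minimal sketch is given below, with the data arrays assumed to hold the measured oligomer sizes and sedimentation coefficients.

    import numpy as np

    def fit_power_law(n_nucleosomes, s_values):
        """Fit log10(s20,w) = log10(k) + a*log10(n); returns (a, log10_k)."""
        # np.polyfit returns the slope first for a degree-1 fit
        a, log_k = np.polyfit(np.log10(n_nucleosomes), np.log10(s_values), 1)
        return a, log_k

    def intersection_n(fit1, fit2):
        """Oligomer size at which two log-log regression lines cross."""
        a1, b1 = fit1
        a2, b2 = fit2
        log_n = (b2 - b1) / (a1 - a2)
        return 10.0 ** log_n

    # Hypothetical use with (n, s20w) data measured at two ionic strengths:
    # fit_60mM = fit_power_law(n_60, s_60)
    # fit_100mM = fit_power_law(n_100, s_100)
    # print(intersection_n(fit_60mM, fit_100mM))   # expected near 11-12 for native fibers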
The differences between refolded and native chromatin were more evident in a plot of szo,w uersus ionic strength (Fig. 4). Long oligomers (ri = 20-60) of native chromatin produced triphasic curves with pronounced breaks a t 60 and 20 mM ionic strengths (Fig 4A). For oligomers consisting of 6 or 10 nucleosomes the transition at 60 mM was absent and their sedimentation coefficients were independent of ionic strength above 20 mM. Oligomers of refolded chromatin, on the other hand, behaved quite differently (Fig. 48) with particles con-from 60 to 20 mM produced only a small change in the vaiue a, but below 20 mM the value decreased markedly towards values typical of a random coil. Also included in Fig. 5A are values of the exponent a for a chromatin sample (52% of total nuclear chromatin) which was isolated and digested at 120 mM ionic strength and then adjusted to, and centrifuged at, lower ionic strengths of 80, 60, 25, and 10 mM. The close correspondence between these values and those obtained as described above where the ionic strength was lowered prior to digestion eliminates the possibility that digestion at different ionic strengths releases subsets of chromatin with different sedimentation properties. The equivalent data for refolded chromatin is shown in Fig. 5B. There was a sharp increase in the value of a as the ionic strength was increased to 20 mM indicating that the relaxed oligomers were taking on a more organized higher order structure. However, above 20 mM there was only a gradual increase in slope with no indication of a transition at 60 mM. Superimposed on the native chromatin data in Fig. 5A is the pattern of change in sensitivity of the fiber to a 5-min incubation with micrococcal nuclease. This data is similar to that observed previously by us (13) with additional data at lower ionic strengths. These changes in sensitivity complement the structural transitions that were indicated by the The values of the exponent a were obtained from regression analysis of plots of log (smJ uersus log (ri) for data obtained as described in the legend to Fig. 3 -70 nucleosomes (B). The data in A was obtained in gradients run to a u2t value of 5 X 10" radians2/s. I, ionic strength. changes in the value of a. Thus as the fiber folded from the relaxed 10-nm fiber into a helical coil there was a decrease in nuclease sensitivity. Between 20 and 60 mM there was an increase in nuclease sensitivity which appears to be related to structural rearrangements of the linker DNA (13) as the nucleosomes become organized into a helical array. Finally, as the fiber underwent further compaction above 60 mM ionic strength there was a concomitant decrease in the accessibility of the fiber to the nuclease. Sedimentation Analysis of Short Oligomers-Sedimentation coefficients of monomers-dodecamers of native chromatin exposed to various ionic strengths were estimated directly from their peak positions in short runs (Fig. 6A). The sedimentation coefficients of the oligomers were relatively insensitive to decreases in ionic strength down to 20 mM. Below this concentration there were sharp decreases in the s20,m value indicating a substantial change in the compaction of the oligomers. Furthermore, the relationship between s~~,~ and it was linear in the range 1-12 nucleosomes at all ionic strengths with no indication of a break in the curve at 6 nucleosomes. Indeed, when the data for short oligomers is combined with that for longer fragments (Fig. 
6B) then a pronounced change in slope becomes evident at 12 nucleosomes particularly at lower ionic strengths. DISCUSSION Rate zonal centrifugation has a number of advantages over the more conventionally used analytical ultracentrifuge for the analysis of the sedimentation characteristics of chromatin. It permits the simultaneous analysis of the behavior of a whole range of oligomer sizes at as many as 6 different ionic strengths. The data generated is, therefore, a function of the behavior of the whole population of molecules in solution and not just an average of all the molecules in the digest, a problem that is often encountered in solution studies (21). Furthermore, for small oligomers (ri d 12) the s~~,~ values are determined for each oligomer directly rather than using the mass average of a previously fractionated sample (11). In our hands, mass averages could never be accurately determined for mixtures of small oligomers. In addition, sedimentation analysis can be carried out over the entire range of ionic strengths that effect the 10 c . * 30-nm transition, especially at the higher, more physiological ionic strengths (-150 mM). Quite clearly, refolded oligomers do not have the same sedimentation characteristics as native fibers. Thus while exposure to increasing ionic strength promotes compaction of the relaxed polynucleosome chain, the 20 and 60 mM transitions are either less marked or absent and the refolded fibers do not appear to reach the same level of fiber condensation as native fibers (compare the values of a in Fig. 5). Therefore, while we were able to reproduce the sedimentation data of Butler and Thomas (11) for refolded chromatin we believe that it cannot be used in support of a solenoid model for the native 30-nm fiber. The folding and structural organization of the 30-nm fiber can be conveniently examined in 3 stages based upon the changes in the value of a (Fig. 5A). The value of a decreases rapidly below 20 mM towards values consistent with a random coil of polynucleosomes (16). Electron micrographs of chromatin at these ionic strengths also showed a relaxed fiber (5,22,23). At approximately 20 mM ionic strength the nucleosomes become organized into a loose, irregular helical array (5,22) with fiber dimensions that already approximate those of the native 30-nm fiber. A value for a of 0.45 indicates that this structure also exists in solution. The value of a increases only slightly as the ionic strength changes between 20 and 60 mM. However, there are indications that there is considerable internal organization of the fiber at this time, presumably induced by charge neutralization of linker DNA. First, the fiber becomes less sensitive to exogenous nucleases at sites within each loop (13,20). This leads to the preferential release of oligomers containing approximately 12 nucleosomes (Figs. 1 and 2, see also Ref. 20). This observation forms the basis of the nucleomer or superbead models (2,3,19) but is only observed at these ionic strengths. Electron micrographs of the fiber also show considerable irregularity at this ionic strength (5,22). Second, there is a concomitant increase in the sensitivity to proteases of the extended "arms" of histone H1 (24)(25)(26). Third, there are changes in the birefringence properties of the fiber indicating a change in the orientation of the nucleosomes relative to the fiber axis (24). All of this evidence suggests that in this ionic strength range the nucleosomes become organized into loops with about 12 nucleosomes in a loop. 
Each loop is stabilized by H1-H1 interactions and the linker DNAs between nucleosomes within the loop are protected from exogenous nucleases which accounts for the release of fragments containing predominantly 12 nucleosomes. Above 60 mM ionic strength the fiber looks relatively smooth in the electron microscope (5,22) but since the value of a continues to increase it appears that in solution the helical coil continues to compact until the fragments eventually precipitate. Thus, while electron micrographs of liver chromatin (22) indicate no overall change in fiber dimensions above 60 mM ionic strength, it cannot be concluded (11,22) that the fiber observed in vitro at 60 mM represents the native fiber. This continued increase in the value of a supports the view ( 5 ) that the mass per unit length continues to increase with increasing ionic strength. Thus while there is general agreement that the mass per unit length at 60 mM ionic strength is equivalent to 6 nucleosomes/ll nm (5, 22), this does not reflect the maximum compaction of the 30-nm fiber. At physiological ionic strengths, the mass per unit length is equivalent to approximately 12 nucleosomes/ll nm ( 5 ) , a value incompatible with a contact helix containing only 6 nucleosomes/turn. The data presented in Figs. 3A, 4.4, and 6 are all compatible with the native fiber being a helical coil with 12 nucleosomes/ turn. Chromatin oligomers containing more than 12 nucleosomes continue to compact at higher ionic strengths with increases in s20,w value that are proportional to the length of the coil. All of these lines intersect at an oligomer size of 12, indicating that this is the minimum number of nucleosomes required to generate one stable turn of the helix. Oligomers containing less than 12 nucleosomes cannot undergo this compaction and are unaffected by increases in ionic strength above 20 mM. In addition, in a plot of log (~~0 ,~) uersus log (ri) for the entire size range of oligomers (Fig. 6B) there is a pronounced break in the line at ri = 12. This decrease in the value of a is consistent with oligomers, containing more than the number of nucleosomes in a turn, folding to form and extend a helical coil or rod. This is particularly evident at low ionic strengths when the coil is very extended. Although this data could also be compatible with a solenoid with 6 nucleosomes/turn if one assumed that 2 turns of the helix were necessary to form a stable unit, the continued increase in the value of the exponent a and the increased mass per unit length values described above render this a less likely possibility. A helical coil with 12 nucleosomes/turn is the simplest model that is compatible with most of the biophysical data obtained from studies carried out at various ionic strengths. Once the fiber is formed, its diameter would be expected to be relatively independent of ionic strength as observed by Williams et al. (lo), whereas the pitch would decrease with increasing ionic strength producing the observed increases in the mass per unit length ( 5 ) and value of the exponent a. The fiber continues to compact in vitro until precipitation occurs supporting the contention that there is no end point to fiber formation (27). Low angle x-ray scattering has been used (10, 28) to try to deduce the pitch of the helix from the meridional banding pattern. 
Although Widom and Klug (28) interpreted their data in terms of a pitch of 11 nm, more recent studies (10) indicate that the value is somewhat higher (24-27 nm), which is very close to the values derived for a simple helical coil by Fulmer and Bloomfield (9). Quite clearly, it is mandatory to obtain accurate values for the pitch of the 30-nm fiber as a function of ionic strength. Although sedimentation analysis can yield information concerning the folding of the 10-nm polynucleosome chain into the 30-nm fiber and give some indication of overall fiber shape it cannot distinguish between a simple one-start helical coil and the more complex helical ribbon ( 5 ) or cross-linker models (10). However, the data presented in this and the accompanying paper (20) show clearly that a dodecamer is a stable intermediate in fiber folding and it is not immediately obvious how the more complex models accommodate this observation. In summary, there is now a growing body of evidence from both solution techniques (9,20,29,30, this study) and electron microscopy ( 5 ) of chromatin isolated at physiological ionic strengths that the 30-nm fiber is generated by the helical coiling of the 10-nm polynucleosomes chain into a helix with about 12 nucleosomes/turn. The changes in sedimentation behavior as the fiber unfolds are consistent with all the changes in shape of the fiber observed in the electron microscope. In addition, the data confirms that so-called "superbead" profiles are a reflection of intermediate stages in fiber folding (13,20). Furthermore, it appears that higher levels of compaction than are allowed by a contact helix (solenoid) with 6 nucleosomes/turn are achievable at physiological ionic strengths. Finally, although we have referred to the higher order chromatin fiber as the 30-nm fiber it must be recognized that in uiuo the diameter of the fiber may well exceed 40 nm (9,21).
2018-04-03T05:19:39.090Z
1987-09-05T00:00:00.000
{ "year": 1987, "sha1": "184306b617ca305d586eacbc4f75223bc0a83ad5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/s0021-9258(18)45340-9", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "e29d32e9ba06cd7d9a33d8208a5eab5755484271", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
220438749
pes2o/s2orc
v3-fos-license
Mycological isolation from animal enclosures and environments in National Wildlife Rescue Centre and National Zoo, Malaysia It is important to provide a baseline of fungal composition in the captive wildlife environment to better understand their role in overall wildlife health. The objectives were to identify species of fungi existing within wildlife animal enclosures and their environment at the National Wildlife Rescue Centre (NWRC) and the National Zoo, Malaysia and to describe their medical and veterinary importance. Samples of air, wall or floor swab, enrichment swab and soil were taken from the animal enclosures, exercise yard and enrichments at NWRC and National Zoo respectively. All samples including those pre-treated samples were plated onto Sabouraud’s Dextrose Agar (SDA). Numerous fungi were grown on all sampling SDA plates regardless by either single or multiple growth. Samples of air in both NWRC and National Zoo had the highest growth of Penicillium spp. with a prevalence of 31.2% and 83.7% respectively. Samples of swab from the wall, floor and enrichments were predominantly by Candida spp. (42.6%) in NWRC and Penicillium spp. (41.6%) in the National Zoo. Prevalence of multiple fungi isolated from the soil samples in NWRC were 57.9% and yeast species was the most common in National Zoo with a prevalence of 88.9%. Overall, 29 and 8 isolates were found in both samples from the NWRC and National Zoo with a predominant species of potential zoonotic fungi have been identified in both premises. The expected fungus Aspergillus spp. was not isolated in all samples in NWRC. Prevalent fungal species found in this study are known to cause disease in animals and humans as primary pathogen and also as opportunistic pathogens that may also cause infection. Thus, health safety precautions should be considered particularly in dealing with conservation of endangered wildlife species, along with personnel and public involvements. There are many factors that help in promoting the growth of fungus namely the level of humidity, temperature, pH, presence of oxygen and absence of light [14,27]. Fungi need water to help them to obtain food, as they release enzymes to breakdown complex materials, water will dissolve these materials thus aiding with absorption [16]. They grow at temperatures within the range of 5°C to 35°C with optimum temperatures for growth between 25°C and 30°C [5]. Fungi in general prefer a neutral pH level as more fungal species grow around the neutral pH level compared to a higher or lower pH level [27]. Fungi can regulate the pH in their environment by secreting acid or alkali [33]. Fungi are best grown in dark places since the presence of light may induce stress for the fungal cells, thus inhibiting them from growing [9]. Oxygen is a critical requirement for all eukaryotic organisms as they help in maintaining the overall cellular metabolism of the fungal cells [10]. Captive wild animal settings or zoos are not just places for a wide range of biodiversity and entertainment, but they also harbor potential emerging infectious diseases [4]. It is estimated that 75% of these diseases are zoonotic, and of these, 70% originate in wildlife populations. Among these, fungal pathogens are emerging at an alarming rate worldwide and pose a significant threat to all wildlife species [6]. According to Sutherland et al. 
[29], there has been a marked increase in fatal fungal skin infections in wild snakes (Snake Fungal Diseases) since 2006 which caused a decline in the population of wild snakes in the eastern United States [2]. Cases of dermatophytosis in captive tigers have been reported in several countries including Thailand [17] and the United States of America [30]. Cryptococcosis is described in many wildlife species including wild birds [1,19]. More recently, Aspergillus flavus has been isolated from the lung lesion of a Malayan tiger in the National Wildlife Rescue Centre in Malaysia (Kamarudin, Z., 2018, pers. comm.). According to Hedayati et al. [12], A. flavus is the second leading cause of invasive aspergillosis after A. fumigatus and the fungus is a saprotrophic and pathogenic residing in the soil. It is important to mention that some fungal diseases with zoonotic potential do not receive enough attention, which in turn would lead to inadequate preventive measures on a global scale. Fungi are extremely ubiquitous in the environment, therefore isolating and identifying those of veterinary and public health importance of protected wildlife could minimize the increase and spread of fungal diseases within wildlife species [18] by implementing specific preventive measures at the enclosures. Since there was lack of studies in Malaysia on this perspective, this study potentially served as an important baseline reference for fungal species distribution that is present in captive animal enclosures. In addition, any human and veterinary important fungi present in the captive wildlife setting can be identified earlier so that safety measure can be taken to prevent infection in animals and humans. Thus, this study was carried out to identify the type of fungi and to compare the presence of medical and veterinary important fungi in the wildlife enclosures and the environment of selected endangered wildlife species in Malaysia. Air samples, floor and enrichment swabs were taken from the captive enclosures or known as their night stall while samples taken from exercise yards or exhibits were the soil and air samples. The enrichment was referred to the animal enrichment provided by the two venues to the animals to enhance them to explore and interacts with their environment, these include woods, balls, swing, etc. In NWRC, the captive enclosures and the exercise yards of the Malayan tiger (Panthera tigris jacksoni), Malayan sun bear (Helarctos malayanus) and clouded leopard (Neofelis nebulosa) were selected while Malayan tiger, Malayan sun bear and Orangutan (Pongo pygmaeus) enclosures and exhibits were selected in the National Zoo. Collection and preparation of samples A passive air sampling technique (gravitational sedimentation sampling) was employed by placing two Sabouraud's Dextrose Agar (SDA) plates at the animal's resting areas in the enclosures and exercise yards (Fig. 1). We used the short contact time 10-15 min based on Hashimoto and Kawakami [11] and our personal experience screening for the presence of fungal in our establishment (laboratory, offices, staff rooms, others). The amount of time is ample to isolate multiple fungi present in the air by carrying the plates around and also leaving the plates at the level of human heights and to limit the risk exposure to the personnel conducting the study. Sterile cotton swabs were used to take samples from the enrichments and floor/wall surface in the animal enclosures (see Fig. 1). 
The swabs were then placed inside a transport media before streaking onto SDA plate. About 100 g soil sample from each exercise yard were collected by scraping the top soil using a clean disposable spoon and kept in a clean zip-locked plastic container separately before analysed. Each soil sample collected was mixed thoroughly using a Stuart scientific vortex machine and 10 g were taken and diluted with 10-fold dilution method using sterilized distilled water up until the third dilution. A 10 g of soil was mixed with 90 ml of sterilized distilled water for the dilution. One hundred microlitre (100 µl) of the third dilution was streaked onto SDA plate, thus the detection of fungi was made from the lowest dilution (10 −3 ). The culture plate was then incubated at temperature between 25°C to 28°C in a dark cabinet and was checked for fungal growth on a weekly basis until week 4. The incubation temperature was established by the selected locations for sampling that had the temperature around 24°C to 32°C as the areas are surrounded by thick tall trees, hence the culture temperature was selected to mimic the condition present at the sampling areas. Identification of fungus The incubated plates were examined for fungal growth starting on Day 3 post inoculation (PI) and on a weekly basis. The examination involved two stages: macroscopic and microscopic examination [7,20]. Macroscopic examination consists of description of consistency, ridges and grooves, as well as the color or pigmentation of the colony morphology observed on top and reverse side of SDA plate. Microscopic examination involved the observation of fungal structures such as conidia, conidiophore, hyphae and the presence of other unique characteristics of the species such as chlamydoconidia or macroconidia, wet-mount preparations was used to visualize the fungal structures. Briefly, a clear or colorless clean cellophane tape was touched onto the surface of mycelia and placed onto a clean glass slide and stained with lactophenol cotton blue (LCB). Candida spp. was identified using the available commercial identification kit API 20 C AUX (bioMérieux, Durham, NC, USA) only for isolation from the NWRC. Descriptive statistics on the percentages of fungal isolation were performed for each type of sample in each premise. RESULTS A total of air samples (n=57; n=25), swab samples (n=38, n=25) and soil (n=10, n=6) were collected from the NWRC and National Zoo, respectively (details in Table 1). Fungi were grown on all sampling SDA plates regardless by either single or multiple growth in both NWRC and National Zoo. In NWRC, 58.6% of fungi were isolated from the animal enclosure and 86.2% of the exercise yard with 86.2%, 36% and 39.9% being isolated from the air samples, swabs and soil respectively. Multiple fungi were isolated from the soil samples in NWRC with a prevalence of 57.9%. Penicillium spp. and Candida guillermondii were the most prevalent in NWRC with a prevalence of 76.2 and 70.5%, respectively. Penicillium spp. were mostly isolated from the air samples and enrichment swabs in the enclosure with the average prevalence of 31.2% (the average % prevalence of both air samples from the enclosures and exercise yards) and 30%, respectively, while they represented only 3.7% of each isolate from the swabs of the enclosure and soil samples. Samples of swab from the wall, floor and enrichments were predominantly represented by Candida spp. (42.6%) in NWRC, especially C. guillermondii (45.3%) and C. tropicalis (40%). 
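The prevalences quoted here and in Table 2 are simple descriptive percentages of positive samples; a brief sketch of that calculation follows, with the counts in the commented example invented purely for illustration.

    def prevalence(positive, total):
        """Percentage of samples in which a fungus was isolated."""
        return 100.0 * positive / total if total else 0.0

    # Hypothetical illustration with made-up counts (not the study's data):
    # air_samples = {"Penicillium spp.": (18, 57), "Candida guilliermondii": (5, 57)}
    # for species, (pos, n) in air_samples.items():
    #     print(f"{species}: {prevalence(pos, n):.1f}% of {n} air samples")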
Another 27 isolates were identified in all samples in NWRC with a prevalence range from 0.9 to 49.5% (see detail in Table 2). In National Zoo, fungi were isolated from all night quarters and exhibit samples with the prevalence from the air, swabs of the enrichment and soil of 83.7, 66.7 and 25% respectively. Penicillium spp., yeast species and Trichophyton spp. were predominant with a prevalence of 60.6, 35.2 and 27.3% respectively. Penicillium spp. were mostly isolated from the air samples with a prevalence of 83.7%, while they were also isolated from 41.6 and 33.3% of the swabs of the enrichment and soil samples, respectively. Yeast species was the most commonly isolated in soil samples with a prevalence of 88.9%, while Trichophyton spp. were predominant in air samples with 39.5%. Another 5 isolates were identified in samples in National Zoo with prevalence ranging from 1.2 to 11.6%, including isolation of Aspergillus spp. (see detail in Table 3). DISCUSSION This study was conducted to isolate fungi species that are present in the NWRC and National Zoo wildlife enclosure and environment and to identify other medical and veterinary importance fungus. Because of the recent finding of A. flavus in the NWRC, this species was expected to be found, surprisingly, A. flavus was not isolated in the present study. In contrast, other Aspergillus species, namely A. fumigatus and A. niger, were detected at a prevalence of 1.9% and 0.9%, respectively. In the National Zoo, the detection of Aspergillus spp. occurred in 5.7% of the collected samples. Fungi can easily infect animals and humans as they are ubiquitous, often by inhalation and penetration through un-intact skin. It is reported that previously, A. flavus had been isolated from the pulmonary lesions of an incidental finding in the necropsy of a tiger in NWRC, indicating that this fungus might be present in the wildlife enclosures and the environment. Aspergillus flavus produces aflatoxin, a very powerful hepatocarcinogenic mycotoxin and are known to cause human and animal infection [12], being more commonly found in the air compared to A. fumigatus. Aspergillus is a saprophytic mold that is closely associated with agriculture and other human activities that make nutrients available to fungi. Aspergillosis is not contagious, nevertheless when the human is immunocompromised, Aspergillus can cause rapid developing acute infection following environmental exposure. Chronic forms of aspergillosis causing respiratory tract infections in wild birds have been reported since at least back in the year 1813 [8]. In general, molds reported in this study are easily transmitted through inhalation of spore-containing air. According to the result of this study (Tables 2 and 3), the most prevalent fungus isolated from both NWRC and National Zoo wildlife enclosure (floor, wall and enrichment swab) and environment (air and soil) is Penicillium spp. In this study, Penicillium spp. is the most abundant fungus present in the environment, it is mostly present on the air sample cultured plates from both inside animal enclosure and the exercise yard of the animal. This is most probably due to the spore dispersal carried by the wind into the animal enclosure from the exercise yard (outdoor). The air sample was taken by placing the plate on the ground and Penicillium spp. are known to be very abundant in soil. Thus, this can explain the higher number of spores present from the air sample of an exercise yard as the plates are closer to the soil. 
The factors that might affect the prevalence of fungus species include the ease of spore dispersal as spore can be widely dispersed especially by the wind intensity. In addition, animals can also be the mechanical vectors that help spreading the spores. Penicillium spp. are ubiquitous soil fungi and usually regarded as unimportant in terms of pathogenicity in human and animals, for most of its species. One of the Penicillium spp., P. marneffei, however, is known to commonly infect immunocompromised individuals [25]. It was only recognized as important when the human immunodeficiency virus (HIV) pandemic occurred in Asia [32] and untreated cases are usually fatal. Infection by Penicillium spp. are rare in domestic animals, however animals such as cat have been reported to get infected with Penicillium spp. [28]. In addition, the organism's natural habitat is in soil endemic to Southern China and South-East Asia [3,32]. Penicillium is globally recognized as the organism responsible for the production of the Penicillin antibiotic, but little is known that one of its species, P. marneffei is an emerging Table 3. Fungal isolates from the animal enclosure (enrichment swab) and environments (air and soil) in the National Zoo (1) 100 (0) 0 (0) 0 (0) 0 (0) 0 (0) 0 (1) 100 Malayan sun bear Night quarters (air), n=4 (2) 50.0 (0) 0 (0) 0 (0) 0 (0) 0 (0) 0 (0) 0 Exhibit (air), n=2 (2) 100 [32]. Conidiobolus coronatus has been isolated markedly in National Zoo while a small proportion detected in NWRC. This fungus has a worldwide distribution but more often seen in tropical rainforests. It is commonly found in soil and decaying leaves and possess a distinctive morphological feature for identification-a short conidiophore that bears conidia that produces hair-like appendages known as villae. Since the mold is powdery, the infection may result from inhalation of the spores hence lesions found in human and animals usually concentrate at the rhino-facial area. Conidiobolus coronatus is known to cause entomophthoromycosis in human, that is usually characterized by nasal cavity tumor due to the aggressive invasiveness of the fungus [31]. Paecilomyces spp. are also found worldwide and has always been regarded as contaminants of the air, but for an immunocompromised host, it may lead to fatal events [34]. Fusarium spp. are widely distributed in soil where they are commonly considered as a contaminant. In humans, this fungus is commonly reported to cause infection in both immunocompetent and immune compromised hosts [15]. It can cause superficial infection as well as invasive and disseminated infection. According to Jain et al. [15], about 15 Fusarium spp. have been identified to cause infections in animals and humans. Example of superficial infection by Fusarium oxysporum is onychomycosis in humans while in animals, Fusarium spp. infection is uncommon. Candida guilliermondii was the second most prevalent fungus identified from NWRC and markedly isolated from the swab samples from the wall, floor and enrichment within the enclosure. This fungus is part of the human skin and mucosal normal flora, and this fungus appeared to be the least virulent amongst Candida species [22] where the author classified Candida species into 3 virulence group of decreasing pathogenicity and C. guilliermondii placed on the third group with the least virulence species. The assumption of high prevalence of this fungus would be the circulation and contact with personnel with the animal and the environment is higher in the NWRC premises. 
As it is a normal inhabitant of human, the yeast becomes opportunistic pathogen when immunological mechanisms are disturbed and the proliferation of the fungus is higher than normal thus leading to disease formation [21]. In animal, infection with the yeast species has been reported in dogs, in which the normal protective barrier was disturbed [26]. Wildlife populations worldwide are under increasing threat from a variety of processes, ranging from climate change to habitat loss that can lead to a physiological stress response [13]. This has become a concern because captive wildlife is more prone to stress due to their unnatural environments. When stressors act for a prolonged time, or when effects accumulate, it is harmful to the animal especially because these environmental fungi can cause systemic mycoses and can be fatal. Zoos are places where all walks of life visit all year round. Hence, it is important to ensure that proper awareness is displayed at the ticket counter and provide the vulnerable visitors such as unhealthy individuals, children or elderly with basic PPE (e.g. facial mask) prior to entry in addition to hand sanitizer at the exits. Zoo personnel (e.g. keepers, veterinarians, curators and biologists) must also be continually reminded that all husbandry practices should be based on principles which minimize stress to the wildlife. Even though it is known that fungi are presented ubiquitously in the environment, most of the fungus may or may not be causing disease in humans and animals. Animals may be carriers of certain agents including fungi, therefore if they are translocated from one place to another, it may seed their new environment with agents. Other than being presence in the environment, they need other factors such as presence in extremely abundant amount to eventually cause disease. As for the fungi species found in both NWRC and National Zoo animal enclosure and environment in this study, this preliminary result should consider the animal and public health disease prevention especially to immunocompromised animals and humans. There may not be many fungus species under a genus that may cause disease, therefore it is important to identify fungus up to the species level in order to know appropriate and practical action to be taken when a pathogenic fungal species is present in the environment. Molecular technique is seen as the best technique to be used for fungal identification up to the species level. Besides that, active air sampling technique can also be implanted for air sampling rather than passive air sampling. This technique can help to capture a fungus species that is presence at a very low amount in the environment. In conclusion, prevalent fungal species found in this study are known to cause disease in animals and humans as primary pathogens and also as opportunistic pathogens that may also cause infection, therefore health safety precautions should be emphasized by the management.
2020-07-09T09:12:37.988Z
2020-07-09T00:00:00.000
{ "year": 2020, "sha1": "20e816c5d41e46268b22db7738723b2e860f9fa8", "oa_license": "CCBYNCND", "oa_url": "https://www.jstage.jst.go.jp/article/jvms/82/8/82_20-0229/_pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b1e0356ec71f8cd5ec7a2240f5c9c4777a0e6849", "s2fieldsofstudy": [ "Environmental Science", "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
250381172
pes2o/s2orc
v3-fos-license
Verapamil and Its Role in Diabetes: Autoimmune pancreatic β-cell loss and destruction play a key role in the pathogenesis and development of type 1 diabetes, with a prospective increased risk for developing micro- and macrovascular complications. In this regard, orally administrated verapamil, a calcium channel antagonist, usually intended for use as an anti-arrhythmic drug, has previously shown potential beneficial effects on β-cell preservation in new-onset type 1 diabetes. Furthermore, observational data suggest a reduced risk of type 2 diabetes development. The underlying pathophysiological mechanisms are not well investigated and remain widely inconclusive. The aim of this narrative review was to detail the role of verapamil in promoting endogenous β-cell function, potentially eligible for early treatment in type 1 diabetes, and to summarize existing evidence on its effect on glycemia in individuals with type 2 diabetes.
Introduction
Approximately 537 million people globally suffer from type 1 (T1D) and type 2 diabetes mellitus (T2D), and the prevalence of both is substantially increasing [1,2]. Without sufficient action to address this situation, the number of people suffering from diabetes is predicted to be 643 million in 2030 [2]. The key factor in developing T1D and advanced T2D is the loss or impairment of the insulin-secreting β-cells of the pancreas. In the last 100 years daily insulin injections have been established as the life-saving treatment for most people with T1D and some with T2D. Nevertheless, despite emerging advancements in insulin development and diabetes technology, the majority of people living with diabetes do not achieve individual therapy goals, increasing their risk of acute and late complications [3]. Pancreatic β-cell loss and destruction play a key role in the pathogenesis and development of T1D. In the pancreatic tissue, islets of Langerhans secrete several different hormones, which are responsible for maintenance of glucose homeostasis. Insulin, the only hormone able to lower blood glucose concentration, is secreted by the β-cells, which represent the major cellular component of the pancreatic islets [4]. The primary physiological stimulus for insulin secretion is known to be the increase of circulating glucose concentration. The direct insulin secretion by glucose involves a "triggering" and an "amplifying" pathway. The "triggering" pathway is activated by several biochemical signals, involving the adenosine triphosphate (ATP) generation by glucose metabolism and the closure of ATP-sensitive potassium (KATP) channels, resulting in membrane depolarization and consequent activation of voltage-gated calcium channels. The subsequent sharp rise of intracellular calcium levels contributes to the triggered exocytosis of readily releasable pooled insulin secretory granules by membrane fusion and release to the cell exterior. After the "first phase" of insulin release resulting in a sharp peak, the amplifying pathway provides lower but sustained insulin release for several hours in the "second phase" of insulin secretion. The amplifying pathway is activated in the presence of maximal intracellular Ca2+ levels and is largely independent of KATP-driven mechanisms [5].
Residual c-peptide levels representing a consistent and sensitive measure of β-cell function [6] and being detected in many people for years following the diagnosis of T1D, contribute to β-cell responsiveness to hyperglycemia and α-cell responsiveness by reciprocal regulation of glucagon secretion to hypoglycemia for glycemic control in individuals with T1D [7]. Carr et al. demonstrated that detectable c-peptide is associated with an increased time spent in the normal glucose range and with less hyperglycemic episodes, but not with the risk of hypoglycemia in those with newly diagnosed T1D [8]. Preserved c-peptide levels in T1D were associated with a more pronounced counter regulation in response to clamp-induced hypoglycemia [9]. On the one hand, regular physical exercise contributes to β-cell preservation, improved insulin sensitivity and less requirements of exogenous insulin administration [10]; on the other hand, in T1D subjects undertaking high levels of physical exercise, the honeymoon period, which is defined by an absence of insulin requirements early after onset of diabetes, is five times longer compared to matched sedentary controls [11]. Next to physical activity, conscious macronutrient intake, such as gluten deprivation or reduced consumption of refined grains may have beneficial effects on β-cell preservation in people affected by new-onset T1D [12,13]. Individuals with T1D are exposed to an increased risk for developing micro-and macrovascular complications, which are associated with episodes of dysglycemia. In this context, residual β-cell secretion, evaluated by measuring fasting c-peptide levels, has been shown to be prospectively associated with reduced incidence of microvascular complications in T1D [14]. Even modestly detectable β-cell levels correlated with a reduced incidence of diabetes-related complications, such as retinopathy and nephropathy [15]. By the fact that autoimmune-mediated β-cell destruction is unavoidably progressing, sooner or later, complex insulin therapy is required for the lifetime. The time from T1D diagnosis to complete lack of measurable insulin (c-peptide) is highly individual as shown by Davies et al., who demonstrated that after 6-9 years of diabetes diagnosis, insulin remained detectable in 60% of individuals, while after 10-20 years of diabetes duration only 35% of the individuals remained c-peptide positive, as defined by detectable fasting c-peptide ≥ 0.017 nmol/L and non-fasting c-peptide ≥ 0.2 nmol/L [16]. Recent research on T1D enables us to refine our understanding in pathogenesis and subsequent development of insulin deficiency in T1D and potentially establish novel prevention and therapy strategies [17]. The impairment of β-cells leads to long-term immune-mediated destruction, low insulin secretory capacity and autoantigen presentation [17]. However, up to now, evidence on effective therapies to delay or halt this process is largely lacking [18]. For these reasons, β-cell rescue and preservation strategies are hot topics on current and future therapeutic strategies in T1D. Exploring beneficial actions for the treatment of T1D, clinical data are suggesting positive effects for peptides or medication reducing the β-cell stress, such as verapamil [17,19,20]. Verapamil was the first non-dihydropyridine calcium channel blocker (CCB) that was approved by the Food and Drug Administration (FDA) in 1981 for clinical use [21]. 
In several clinical implications, such as cardiac arrhythmias or combination treatment of hypertension, it has proven efficacy in everyday clinical practice due to its good safety profile and pharmacodynamic properties [22,23]. Next to the previously described impact of preserved endogenous insulin secretion, measured by c-peptide levels in T1D individuals, the role of c-peptide is not well defined in T2D, a disease that is considered to be associated with insulin resistance and a reduced β-cell function [24]. Therefore, preserving β-cells function is one of the principle aims in the treatment of T2D to delay the natural course of the disease, necessitating the introduction of insulin therapy in the majority of patients [14]. In T2D patients, regular moderate physical activity and physical health represent well accepted key factors next to regular orally administrated antidiabetic medication and finally additional exogenous insulin application combined with conscious macronutrient intake in order to prospectively hold back the progress of β-cell decline and insulin resistance. In this matter, several clinical observational studies in general reported about decreased risk of new onset diabetes and lower fasting blood glucose levels in diabetes patients receiving orally administrated verapamil [25][26][27]. Dietary factors are estimated to contribute to maintaining insulin secretion and sensitivity by reduced consumption of refined grains and meat products in T2D [13]. Some prospective studies reported a positive association between residual insulin secretion in T2D patients and less microvascular complications [28], but up to now, to our knowledge, no data regarding the association between residual insulin secretion and major outcomes, such as all-cause mortality and mortality due to cardiovascular diseases, are available in T2D patients [29]. In this regard, we review the role of orally administrated verapamil in diabetes positively influencing β-cell function and glycemic control as well as its potential properties to prevent diabetes development. Method Section Scientific Research We selected relevant scientific research published from October 1984 until May 2022 by searching PubMed. Potentially eligible studies were considered to be included in our narrative review after searching by combined-term medial subject headings and keywords, such as type 1 diabetes (T1D), type 2 diabetes (T2D), insulin secretion, β-cell preservation, verapamil, and Thioredoxin-interacting protein (TXNIP). After completing the search, 69 papers and one web source were included to detail the systemic and cellular effects of orally administrated CCB verapamil in T1D and T2D subjects. T1D is an autoimmune-mediated disease characterized by progressive destruction of the pancreatic β-cells resulting in long term lack of the hormone insulin [30]. Pancreatic β-cells play a pivotal role in the synthetization and secretion of insulin, as the body's solo source [31]. In this regard, insulin represents the main player for promotion and maintenance of metabolism [32]. In the scientific community it is an accepted fact that diabetes is associated with a reduction in β-cell mass and to date there is no approved drug treatment that targets damage to these cells [33,34]. Pancreatic β-cells have, as reported in several studies, a weak antioxidant capacity and are very sensitive to oxidative stress interactions occurring within the cells [31,33]. 
Although several trials have studied the mechanisms of β-cell loss in the different types of diabetes, there is less information referring to the residual β-cells in autoimmune T1D [35].
Insulin Secretion in Diabetes
Different mechanisms are postulated for β-cell failure, as demonstrated especially for T2D individuals [36]. β-cells in T2D patients are reported to be secretory-functionally inactive for decades, and their potential and preservation might contribute to new therapeutic approaches [35]. A heated discussion is ongoing whether a β-cell reduction occurs in every person with T2D. Some researchers argue that the β-cell mass in T2D patients stays normal and regard the functional abnormality of insulin secretion as the main problem of hyperglycemia. On the other hand, the scientific community discusses the reduction of the absolute β-cell mass, which is far more difficult to restore [34]. In this context, oxidative stress might inactivate key islet transcription factors, producing "stunned" β-cells not responding to glucose [36][37][38]. Although it is difficult to measure the β-cell mass in vivo, there is a proposed positive correlation between high mass and high insulin sensitivity and secretion [34,39,40].
Thioredoxin-Interacting Protein (TXNIP) and Its Regulation of Pancreatic β-Cells
Thioredoxin-interacting protein (TXNIP) is an attractive aspect to focus on, as it has been suggested to be a major factor in the regulation of pancreatic β-cell dysfunction and death, representing key processes in the pathogenesis of T1D and T2D [19,41]. Therefore, TXNIP represents a very promising future target in the therapy for diabetes based on basic, preclinical and retrospective epidemiological analyses [41,42]. TXNIP inhibits thioredoxin (TRX) as a part of the intracellular antioxidant system, which manages different mechanisms in the β-cells, mainly the reduction of the antioxidant capacity and subsequent oxidative stress and apoptosis in the β-cells, resulting in reduced insulin production capacity (Figure 1) [43,44]. In detail, TXNIP regulates glucose homeostasis as a signal complex, the TRX/TXNIP signal complex. This redoxisome represents the basis of TXNIP regulation as a redox response. TXNIP has been shown to bind NOD-like receptor protein 3 (NLRP3) and activate the inflammasome [41]. TXNIP, as a member of the ancestral α-Arrestin family, binds to the Itchy E3 Ubiquitin Protein Ligase (ITCH) and enables the ubiquitination of its substrates.
TXNIP in general is transcriptionally regulated by nuclear receptors (NR), such as the glucocorticoid receptor (GR), the vitamin D receptor (VDR), the farnesoid X receptor (FXR) and peroxisome proliferator-activated receptors (PPARs), in a cell-specific manner [41]. These signal complex regulators are involved in the physiological regulation functions of TXNIP, for example in the regulation of glucose homeostasis, as pictured in Figure 2 (TXNIP signal complex regulating glucose homeostasis [41]). TXNIP has been shown to be activated by hyperglycemia and to be increased in diabetes, whereas TXNIP deletion seems to be associated with non-diabetes occurrence in general. In detail, TXNIP is one of the genes that is highly upregulated by hyperglycemia in murine and human β-cells. Therefore, in the case of β-cells the glucose sensor carbohydrate-response element-binding protein (ChREBP) directly binds to the promoter region of TXNIP and increases gene expression [41,45,46]. Furthermore, TXNIP inhibition has been proven to promote insulin production and glucagon-like peptide 1 signaling via microRNA regulation [42]. On the other hand, the glucose responsiveness of TXNIP is linked to the notable functions of induction of apoptosis as a reaction to hyperglycemic episodes [41,45,47]. This stress-induced upregulation of TXNIP can be noticed in the pancreatic islets during the progression of diabetes in humans and mice [48,49]. Varying factors, such as glucocorticoids, lipids, inflammation/cytokines and oxidative stress, which influence and stimulate TXNIP induction, are described in previous research [50][51][52][53].
In this context, there is some scientific evidence that pancreatic β-cells as well as skeletal myocytes share common mechanisms of fuel sensing in order to cooperate and maintain glucose homeostasis in the whole-body system. Therefore, TXNIP has recently been shown to play a key role as a diabetogenic culprit disrupting both of the following processes: on the one hand by activating the pancreatic islets by mobilizing insulin-containing vesicles, and on the other hand by modulating the translocation of resident glucose transporters in the periphery of the muscles [48,54]. The thioredoxin system plays an important role at a nodal point linking pathways of redox regulation, energy metabolism, antioxidant defense, and in the end cell growth and survival [44]. Hypoglycemic agents, carbohydrate-response-element-binding protein and cytosolic calcium levels regulate β-cell TXNIP expression, and these different aspects contribute to the regulation of whole-body glucose maintenance [42]. This vicious cycle may contribute to TXNIP-triggered β-cell failure and overt diabetes [44]. Next to the mentioned TXNIP interactions, TXNIP is an α-Arrestin that acts as an adaptor for glucose transporter 1 (GLUT1), which, when upregulated as a major glucose facilitator, plays an important role in the development of metabolic diseases, such as diabetes. TXNIP interacts with GLUT1 lipid nanodiscs in a 1:1 ratio and regulates glucose uptake in response to intracellular as well as extracellular signals. The TXNIP-GLUT1 interaction depends on TXNIP interaction with phosphatidylinositol 4,5-bisphosphate (PI(4,5)P2 or PIP2), and TXNIP does not interact with GLUT5 [55]. In summary, the TRX/TXNIP signal complex has been shown to play an important role in redox-related signal transduction in many different types of cells in various tissues. Additionally, TXNIP has several cellular functions, which largely rely on its scaffolding function as a member of the α-Arrestin family [41]. By both functions, i.e., the redox-dependent and -independent ones, TXNIP has emerged as a master regulator of glucose homeostasis.
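As a purely qualitative illustration of the regulatory chain summarized above (hyperglycemia and intracellular calcium drive ChREBP-mediated TXNIP transcription, TXNIP inhibits TRX, and the loss of antioxidant capacity promotes β-cell stress), the toy sketch below encodes only the stated directions of effect; all thresholds and scores are arbitrary illustrative values, not measured quantities.

```python
def txnip_axis(glucose_mmol_l, verapamil=False):
    """Toy qualitative model of the TXNIP regulatory chain described above.
    Scores are arbitrary illustrative units, not measured quantities."""
    hyperglycemia = glucose_mmol_l > 10.0              # chronic high glucose
    intracellular_ca = 0.4 if verapamil else 1.0       # L-type channel blockade lowers Ca2+
    # ChREBP-driven TXNIP transcription rises with glucose and intracellular Ca2+
    chrebp_activity = (1.5 if hyperglycemia else 0.5) * intracellular_ca
    txnip = chrebp_activity
    trx_capacity = max(0.0, 1.0 - txnip)               # TXNIP inhibits thioredoxin (TRX)
    beta_cell_stress = txnip * (1.0 - trx_capacity)    # low TRX capacity -> oxidative stress
    return {"TXNIP": round(txnip, 2),
            "TRX capacity": round(trx_capacity, 2),
            "beta-cell stress": round(beta_cell_stress, 2)}

print(txnip_axis(15.0, verapamil=False))  # high TXNIP, depleted TRX capacity, high stress
print(txnip_axis(15.0, verapamil=True))   # verapamil blunts the TXNIP response
```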
Targeting TXNIP in diabetes seems to play an important role in the whole-body glucose metabolism regulation influenced by variable factors and circumstances and in the future might inaugurate new therapeutical potential in diabetes therapy. Verapamil and Its Impact on Diabetes The non-dihydropyridine CCB verapamil and its role in clinical routine as cardiac antiarrhythmic therapy and a blood-pressure-lowering drug was approved by the FDA in 1981 due to its advantageous pharmacodynamics for the treatment of angina, hypertension, supraventricular tachycardia and atrial fibrillation [21,22]. In recent years it has been considered as a promising novel approach in the therapy of TD1 and T2D [21]. The cardiac side effects and antidiabetic efficacy of R-form verapamil enantiomer (R-Vera) and S-form verapamil enantiomer (S-Vera) were evaluated in mouse models and R-Vera seems to represent an effective option in diabetes treatment by downregulating TXNIP and reducing β-cell apoptosis with an established safety profile and only weak adverse cardiac effects, such as negative inotropy [21]. While the rise of intracellular calcium concentration is known in general as the main trigger of exocytosis and subsequent insulin secretion, verapamil reduces by blocking calcium channels the intracellular calcium concentration and prevents long-term β-cell impairment, which is partly caused by chronic increased intracellular Ca 2+ levels. This preventive mechanism contributes to preserved β-cell function by TXNIP downregulation, ameliorating less apoptosis in pancreatic β-cells and helping to preserve continuously endogenous insulin levels during glucose metabolism regulation [21]. In general, the three different subtypes of calcium channels, i.e., Ca V 3.1, -3.2 and -3.3, are distributed over the whole body and have defined roles in cardiac regulation, vasculature tone regulation and the activation of the nervous system. The main effect of verapamil results in blocking of both L-Type and T-type channels with higher affinity for depolarized channels than for resting channels. The highest affinity, up to ten times higher, of verapamil is reported in depolarized L-type channels than in the resting channels [22]. The phenylalkylamine Br-verapamil binds in the central cavity of the pore on the intracellular side of the selectivity filter-blocking the ion-conducting pathway-and structure-based mutations of key amino-acid residues confirm the verapamil binding on both sides, as reported by Tang et al. [56]. These specific positive effects could be verified in several studies, as shown in mouse models resulting in improved β-cell survival and function, enhanced insulin secretion and reduced diabetes rate [19]. Next to these findings, several clinical observational studies, such as International Verapamil SR/Trandolapril (INVEST) and the Reasons for Geographic and Racial Differences in Stroke (REGARDS) study, confirmed decreased risk for newly diagnosed diabetes and lower fasting blood glucose levels in response to regular oral verapamil intake [25][26][27]. Verapamil Administration and β-Cell Mass in Mouse Model In this regard TXNIP was identified as a target to halt the functional β-cell mass loss as described by Xu et al. in a mouse model [19]. Hyperglycemia and diabetes induce an upregulation of β-cell TXNIP expression, and TXNIP overexpression causes β-cell apoptosis. 
Although it has previously been shown that TXNIP is strongly dependent on and induced by glucose, different proinflammatory cytokines, such as tumor necrosis factor α (TNFα), interleukin-1β (IL-1β) and interferon γ (IFNγ), each have distinct and partly opposing mechanisms and pathways affecting β-cell TXNIP expression [19,50]. Xu et al. could reveal positive effects due to inhibition of TXNIP expression, enhanced endogenous insulin levels as well as improved glucose homeostasis and sensitivity in the mouse model. These positive effects of TXNIP repression by orally administrated verapamil in a mouse model seem to be conditional on a reduction of intracellular calcium levels, inhibition of calcineurin signaling and nuclear exclusion, and decreased binding of carbohydrate response element-binding protein to the E-box repeat in the TXNIP promoter [19]. For the first time it was highlighted that oral medication with the CCB verapamil could effectively inhibit proapoptotic β-cell TXNIP expression, improve β-cell survival and function with weak adverse cardiac effects, and could represent a new therapeutic approach for the prevention and therapy of diabetes.
Clinical Implications in Type 1 Diabetes (T1D)
To translate these positive findings reported in a mouse model [19,50] into humans, Ovalle et al. performed a trial in order to assess the efficacy and safety of using oral verapamil in subjects with recent-onset T1D in order to downregulate TXNIP and enhance the patients' endogenous β-cell mass and insulin production [18]. Therefore, in a double-blind, placebo-controlled Phase 2 trial, 32 participants were randomized to assess the efficacy and safety of orally administrated verapamil in subjects with recent-onset T1D in order to downregulate TXNIP, and to evaluate the maintenance of endogenous β-cell mass and insulin production. Furthermore, 26 participants were randomized to the two treatment groups, i.e., a placebo control group versus oral medication with verapamil for 12 months. The initial dose of verapamil was 120 mg daily and was advanced to a maximum dose of 360 mg daily, if tolerated. The primary outcome measure assessed the functional β-cell mass by the area under the curve (AUC) of the two-hour mixed-meal stimulated c-peptide after 12 months. As secondary outcome measures, the changes from baseline in exogenous insulin requirements within both 12 weeks and 12 months, hypoglycemic events, as well as the HbA1c values within 12 weeks and 12 months were defined. Improved endogenous β-cell activity, lower exogenous insulin requirements and fewer hypoglycemic episodes were demonstrated in the verapamil group for at least 24 months and were lost upon discontinuation [18,57]. These positive findings were consistent with the previous results in preclinical diabetic mouse model studies and in isolated human islets [18,50]. Evaluating secondary endpoints, such as the total daily dose of insulin (TDDI) to maintain glycemic control, a significant treatment difference of −43% in the verapamil group compared to the placebo group could be revealed within the first follow-up year, as well as non-significantly lower HbA1c levels (p = 0.083) in the verapamil group. Moreover, improved glycemic control with significantly fewer hypoglycemic episodes in the verapamil group (p = 0.0387), as well as more time within the target range of 3.9-10.0 mmol/L assessed by a continuous glucose monitoring (CGM) system, were reported within the verapamil group.
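As a rough illustration of how the two outcome measures quoted above are typically computed, the sketch below estimates the area under a two-hour mixed-meal c-peptide curve with the trapezoidal rule and the fraction of CGM readings within the 3.9-10.0 mmol/L target range; the sampling times and values are hypothetical and do not come from the cited trial.

```python
import numpy as np

def cpeptide_auc(times_min, cpeptide_nmol_l):
    """Area under the c-peptide curve (trapezoidal rule) over a mixed-meal test."""
    return np.trapz(cpeptide_nmol_l, times_min)  # units: nmol/L x min

def time_in_range(glucose_mmol_l, low=3.9, high=10.0):
    """Fraction of CGM readings within the 3.9-10.0 mmol/L target range."""
    g = np.asarray(glucose_mmol_l)
    return float(np.mean((g >= low) & (g <= high)))

# Hypothetical 2-h mixed-meal test: sampling times (min) and c-peptide (nmol/L)
t = [0, 30, 60, 90, 120]
cp = [0.20, 0.45, 0.60, 0.55, 0.40]
print(f"c-peptide AUC: {cpeptide_auc(t, cp):.1f} nmol/L*min")

# Hypothetical CGM trace (mmol/L)
cgm = [5.2, 7.8, 11.3, 9.6, 4.1, 3.6, 6.9]
print(f"time in range: {time_in_range(cgm):.0%}")
```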
Verapamil treatment did not affect fasting glucagon levels, and no severe adverse events causing treatment discontinuation occurred in the verapamil group. These positive effects, especially the comparable glucagon levels in both groups, might be explained by an improved insulin sensitivity due to verapamil administration resulting in an overall better glucose control. These aspects might contribute to lower exogenous insulin requirements, which in turn could reduce the hypoglycemic episodes [18]. These different mechanisms might result in an overall improved glucose control, and stable glucagon levels in both groups might serve as a positive feedback control mechanism. Importantly, verapamil administration did not cause any severe episodes of hypotension, heart rate abnormalities or electrocardiogram (ECG) alterations. These results emphasize the potential translational implications and the impact on clinical care, and encourage the scientific community to conduct larger follow-up trials in order to develop novel therapeutical approaches [18,19]. The clinical implications of TXNIP targeting in T1D subjects seem to preserve additional therapeutic opportunities to decrease long-term micro- and macrovascular complications, such as diabetic vascular dysfunction, diabetic retinopathy as well as diabetic nephropathy, and to decrease the rate of diabetes-related morbidity and mortality [58,59]. These positive effects are based on the established mode of action of verapamil, i.e., the blockade of L-type calcium channels resulting in a decrease of the intracellular calcium level followed by an inhibition of TXNIP transcription [19]. In this context, tissue with a high expression level of L-type calcium channels, such as the heart or the β-cells, consequently benefits from the TXNIP inhibition, and positive effects have been shown in diabetic heart disease [60,61]. A recently published exploratory study by Xu et al. assessed the potential systemic changes in response to verapamil treatment by global proteomics analyzed using liquid chromatography-tandem mass spectrometry (LC-MS) and revealed positive systemic and cellular effects of orally administrated verapamil in T1D subjects [57]. The initial trial was registered at clinicaltrials.gov (NCT02372253, 2/20/2015), and previous research was published by Ovalle et al. 2018 [18]. The participants were randomized to oral verapamil (360 mg sustained-release daily) or placebo. In this study, focusing on continuous use of verapamil, several positive effects, such as delayed T1D progression, promotion of endogenous β-cell function and consecutively lowered insulin requirements by continuous verapamil application, were revealed. These positive effects were sustained for at least two years by regular application and were lost upon discontinuation. No further follow up after two years was performed in this exploratory trial. Therefore, the current studies point out crucial mechanistic and clinically beneficial effects of administrated verapamil in T1D patients [57]. These positive effects of orally administrated verapamil might be explained by TXNIP inhibition causing β-cell protective and anti-diabetic effects. Analysis of chromogranin A (CHGA) serum levels as a potential therapeutic marker before and after treatment revealed a positive correlation with loss of β-cell function; CHGA reflected changes upon verapamil treatment and discontinuation, and this persisted over a follow-up time of at least two years.
In summary, the results of this exploratory study suggested that continuous orally administrated verapamil treatment in T1D individuals may lower insulin requirements and decelerate disease progression for at least two years after diagnosis. These positive effects are associated with normalization of CHGA levels, anti-oxidative effects, and an immunomodulatory gene expression profile in pancreatic islets. The complex interaction is pictured in Figure 3 [57]. All these changes might contribute to the overall beneficial effects of verapamil use. (Figure 3 [57]. Abbreviations: TXNIP, thioredoxin-interacting protein; IL32, interleukin 32; BCL2L2, Bcl-2-like protein 2; GP2, glycoprotein 2; INSIG1, insulin-induced gene 1; HLA, human leucocyte antigen; TXNRD1, thioredoxin reductase; SRXN1, sulfiredoxin reductase; red arrow, upregulation by verapamil; green arrow, downregulation by verapamil.) These reported beneficial findings have to be confirmed in larger studies and might improve diabetes control in subjects with T1D in the future [18,57].
Clinical Implications in Type 2 Diabetes (T2D)
In a retrospective population-based cohort study from Taiwan's National Health Insurance Research Database, regular oral verapamil use was associated with a decreased incidence of T2D in patients with no known history of diabetes in comparison to a matched group of patients treated with other CCBs, with an adjusted hazard ratio of 0.80 [6].
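Under a proportional-hazards assumption, an adjusted hazard ratio of 0.80 means that the instantaneous rate of developing T2D among verapamil users is 80% of that in the matched comparison group at any given time; the short sketch below translates this into cumulative-incidence terms using a hypothetical baseline incidence that is not a figure from the cited cohort.

```python
# Illustrative only: interpret the reported adjusted hazard ratio of 0.80 under a
# proportional-hazards assumption, S_treated(t) = S_control(t) ** HR.
hr = 0.80
baseline_cumulative_incidence = 0.10            # hypothetical 10% T2D incidence in comparators
s_control = 1 - baseline_cumulative_incidence   # diabetes-free (survival) probability
s_verapamil = s_control ** hr
print(f"diabetes-free probability: control {s_control:.3f}, verapamil {s_verapamil:.3f}")
print(f"cumulative incidence with verapamil: {1 - s_verapamil:.3f}")
```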
These positive findings are supported by the observational data analyses from the International Verapamil SR/Trandolapril (INVEST) studies, which revealed a lower risk for developing diabetes as well as the data derived from the study using the Reasons for Geographic and Racial Differences in Stroke (REGARDS) cohort, where lower fasting blood glucose levels were shown in patients using verapamil compared to subjects with diabetes without CCB [25][26][27]. The results of both mentioned observational studies highlight the positive effects of orally administrated verapamil as a potentially preventive agent in T2D development. Next to these preventive aspects, positive results regarding the inhibition of gluconeogenesis are reported in T2D patients, which contributes to improved glucose homeostasis in T2D individuals [62]. Next to the reported studies, which have shown a lower incidence of T2D in verapamiltreated subjects, Malayeri et al. could reveal positive effects in T2D subjects in a randomized, double-blind, placebo-controlled trial [33]. In this study, verapamil administration showed a better glycemic control by means of decrease of HbA1c, decrease of TXNIP expression and increased glucagon-like peptide-1 receptor (GLP1R) mRNA providing increasing β-cell survival [33]. Additional findings by Carbovale et al. revealed significantly lowered plasma glucose levels in verapamil-treated subjects with T2D [63]. On this account, verapamil may serve as an effective oral adjunct therapy in combination with oral antidiabetic drugs in T2D patients in the future as it is safe, improves glycemic control, and might preserve β-cells function as demonstrated in T1D and T2D mouse models [21]. These previously described positive effects of orally administrated verapamil, based on retrospective population-based and observational data analyses [6,[25][26][27] as well as the presented data of a randomized, double-blind, placebo-controlled trial [33], could elucidate the positive effects of TXNIP regulation on glucose metabolism. Additionally, positive findings were revealed by Hong et al. in mouse models of T2D, who demonstrated for the first time new mechanistic insights and novel links between TXNIP and proinflammatory cytokines and microRNA signaling [50]. Furthermore, latest research results by Wu et al. revealed positive effects of verapamil use in type 2 diabetic rats on bone mass, microstructure as well as macro-and nano mechanical properties of the femur [64]. Taken together these several positive effects emphasize the important role of TXNIP and its effects on the pancreatic β-cell and TXNIP expression in T2D and underline through various systemic and cellular effects its potential as an adjunctive therapeutic approach. Discussion Since loss of functional β-cell mass is one of the key aspects of diabetes in general, different therapeutical approaches have been established in past decades in order to halt this process [20]. Chronic increased intracellular Ca 2+ levels seem to contribute to impaired β-cell function and are associated with long term β-cell impairment. In this regard, excitotoxicity or overnutrition and the combination of both stresses seem to play an important role, as they might cause alterations in the β-cells transcriptome, mitochondrial energy metabolism, fatty acid β-oxidation, and mitochondrial biogenesis [65]. 
Next to the current physical activity recommendations of 150 min of moderateintensity aerobic exercise per week resulting in optimized glycemic control in individuals with diabetes [66,67], additional early oral verapamil usage has been reported to improve insulin-stimulated glucose transport in skeletal muscle, resulting in optimized glycemic control and improved insulin sensitivity. Next to the mentioned positive effects of orally administrated verapamil on the β-cell preservation and the improved glycemic control [31,48], several overall beneficial effects observed with verapamil have been illustrated [57]. In summary, in our opinion these far reaching cellular and systemic regulatory effects seem to contribute to the positive assessment of verapamil, referring to its impact on diabetes. Next to the regulating effects on the thioredoxin system [57], in individuals with diabetes, who are predisposed to micro-and macrovascular complications during their lifetime, the management of autoimmune-related injury has to be focused. Verapamil promotes by regulation of the thioredoxin system several antioxidative, anti-apoptotic and immunomodulatory interactions in the human pancreatic islets [57]. Current scientific evidence suggests that TXNIP-targeting therapeutics, such as verapamil, seem to play an important role as central regulators of whole-body glucose homeostasis [41]; nevertheless, the basic molecular mechanisms of how TXNIP interacts with other proteins in different cellular tissues is not fully understood. In this context, up to now the interaction between TXNIP and glucagon is not completely understood, but Thielen et al. could identify a novel orally substituted quinazoline sulfonamide, SRI-37330, with an excellent safety profile and inhibition of TXNIP in human islets, inhibition of glucagon function and secretion, lowering hepatic glucose production and strong anti-diabetic effects in a mouse model of T1D [68]. These reported findings on SRI-37330 are consistent with previous observations on TXNIP targeting by blockage of the L-type calcium channels with verapamil. These positive effects for verapamil have been shown in mouse models [19,43], in a randomized controlled trial in individuals with T1D [18], as well as the association with reduced incidence of newly diagnosed T2D [6,25,26,42] and better overall glycemic control in subjects with diabetes [27]. By the lack of validated clinical approaches for detecting insulitis and β-cell decline in T1D preclinical models to diagnose eventual diabetes and to monitor the efficacy of therapeutical interventions, ultrasound imaging of the pancreatic perfusion dynamics revealed delayed diabetes development by orally administrated verapamil [69]. These therapeutical strategies might provide a deployable future predictive marker for therapeutic prevention in asymptomatic T1D individuals [69]. Nevertheless, verapamil is a blood pressure medication and an anti-arrhythmic drug and its TXNIP capacity is linked to its function as L-type calcium channel blocker [68]. Therefore, in our opinion the daily administrated verapamil has to be limited to certain patient populations, especially those who tend to hypotension and left ventricular systolic dysfunction, suffer from hepatopathy or might be predisposed for potential polypharmacy drug interactions. These side effects might prohibit its regular clinical prescription. 
Other important points that have to be mentioned are the lack of data referring to the long-term application of verapamil, specifically in its indication as a diabetes-modifying drug. The present exploratory studies reveal some far-reaching systemic and cellular effects of verapamil treatment in the context of T1D [57]. Next to the described preservation of β-cell function in the pancreatic tissue, unappreciated positive connections between immune system, regulation of proinflammatory cytokines, lowering of CHGA in response to verapamil use were revealed and might help to dampen the associated autoimmune processes in T1D [57]. In our opinion, these interesting aspects contribute to the positive overall effects of verapamil in diabetes. Nevertheless, the current scientific studies were limited to small numbers of subjects. In this context, the VER-A-T1D trial (VER-A-T1D; NCT04545151) as a multicenter, randomized, double-blind, placebo-controlled study will evaluate the effect of orally administered verapamil on the preservation of β-cell function as measured by stimulated c-peptide levels after 12 months. Furthermore, another multinational trial investigates the use of verapamil in children and adolescents with newly diagnosed T1D to assess hybrid closed loop therapy and verapamil for β-cell preservation in new onset T1D (CLVer; NCT04233034), which was initiated in July 2020 and will be completed in September 2022. Nevertheless, the outcomes of both initiated studies and the presented scientific research in general are limited to a small number of participants and a short follow-up time with a lack of long-term follow-up results. Future research and longer follow-up periods will be of great interest for the scientific community, such as safety profile and side effects, as well as their daily practicability regarding the regular continuous verapamil use as a new innovative therapeutical approach. In the end the previously described positive effects of oral adjunct verapamil administration in subjects with T1D have to be confirmed in larger studies. Conclusions In conclusion, daily orally administrated CCB verapamil added early to standard therapy in diabetes, mainly T1D, might contribute to establishing an effective adjuvant T1D therapy. Inhibition of β-cells TXNIP expression seems to represent a new therapeutical approach for the future prevention and therapy of diabetes, while preserving and promoting the person's own endogenous β-cell function as well as optimizing overall glucose control by reducing exogenous insulin requirements and reducing hypoglycemic risk. Next to the mediated β-cell preservation, far-reaching positive systemic and cellular effects by daily orally administrated verapamil use seem to dampen the associated autoimmune processes in T1D. In patients with no history of diabetes mellitus, a decreased incidence of T2D could be revealed in observational data analyses compared to the usage of other CCB. This additional safe and effective novel approach might provide an adjunctive therapeutical treatment option in the future management of diabetes mellitus and has to be confirmed in further clinical investigation in larger patient cohorts.
2022-07-09T15:24:19.484Z
2022-07-06T00:00:00.000
{ "year": 2022, "sha1": "1ac68e60e88992703340d0bd34b2bbb63079e373", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2673-4540/3/3/30/pdf?version=1657086369", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4c5243b33ba9ff0a78813bec5e20340da7ad5b4d", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
233025931
pes2o/s2orc
v3-fos-license
Azelaic Acid/Expanded Graphite Composites with High Latent Heat Storage Capacity and Thermal Conductivity at Medium Temperature A novel azelaic acid/expanded graphite (AA/EG) phase change composite (PCC) was fabricated as a shape-stabilized phase change material (PCM) for latent heat storage at medium temperatures. The composite exhibited a low supercooling degree and high heat storage capacity. Despite the impregnation of a high quantity of AA (85 wt %) in the porous network of EG, there was no leakage of liquid AA. This was attributed to the capillary forces and surface tension forces. The pure AA exhibited a melting temperature of 108.0 °C, with an intrinsically low supercooling degree of 5.8 °C. The melting temperature of AA in the PCC decreased slightly to 105.8 °C, and there was a significant decrease in the supercooling degree to 1.0 °C. The AA/EG PCC exhibited a high latent heat storage capacity of 162.5 J/g, and there was a significant gap between the decomposition temperature and the phase change temperature range. Therefore, the composite exhibited high thermal stability during operations. The results of an accelerated thermal cycling test (200 cycles) indicated the high cycling durability and chemical stability of the PCC. The thermal conductivity of AA increased by 15.7 times after impregnation in EG, as compared to that of the pure AA, and thus, thermal kinetics of the PCC was improved. The results of a heat storage/release test with 15 g of the PCM revealed that the melting and solidification of the AA/EG PCC were 5.0-fold and 7.4-fold faster, respectively, than those of the pure AA. This was attributed to the high thermal conductivity of the PCC. INTRODUCTION Thermal energy storage (TES) is an important strategy for the efficient utilization of thermal energy to alleviate the issue of fossil fuel shortage. 1 Latent heat storage (LHS), among the various TES technologies, has attracted significant attention owing to the high energy storage capacity for LHS systems; furthermore, the heat storage/release in such systems occurs at a defined temperature that corresponds to the phase change temperature. 2 The energy storage capacity for LHS systems is 5−14-fold higher than that for sensible heat storage systems at the same unit volume. 3 Phase change materials (PCMs) are employed as the storage media in LHS systems. The application of PCMs in the low-temperature range (T < 80°C ) has been extensively researched, and PCMs are commercially utilized worldwide for low-temperature applications. The application of PCMs in the medium-temperature region (80°C < T < 250°C) remains relatively unexplored despite its immense economic potential. 4 It has been determined that up to 5−6% of the annual energy consumption in Germany occurs at 100−300°C. The thermal energy in this temperature range is utilized not only for steam generation, hot/cold conditioning, and cooking but also in the textile, paper, and rubber industries. 5 Recently, there has been significant interest in the working of PCMs at 100−130°C for solar energy applications. 6,7 The utilization of organic and inorganic PCMs for mediumtemperature applications presents different advantages and disadvantages. 6,8 The optimal PCMs should be nontoxic, noncorrosive, and abundantly available; additionally, they should exhibit a high heat storage density, high cycling durability, and a low supercooling degree. 9 Hydroxides, nitrates, carbonates, and their eutectics are classified as inorganic PCMs. 
5 They are thermally stable and inexpensive; however, their applications are limited by drawbacks such as inhomogeneous melting, phase separation, and corrosion. 4 Organic PCMs usually exhibit homogeneous melting, no phase separation, and little or no corrosion, 2 unlike the inorganic PCMs. However, the supercooling degree, which is defined as the difference between the melting and crystallization temperatures, of organic PCMs is high. Although most sugar alcohols (galactitol, mannitol, and erythritol) exhibit ultrahigh heat storage capacity, their applicability is hindered by the undesirably high supercooling degrees of up to 65°C . 10,11 Salicylic acid, benzanilide, and hydroquinone also exhibit high supercooling degrees of 13−48°C. 12 A eutectic mixture of adipic acid and sebacic acid exhibits a supercooling degree of 17°C. 13 The optimal supercooling degree of a PCM should be less than 5°C. 14 The addition of nucleating agents such as silver iodide (AgI), calcium pyrophosphate (Ca 2 P 2 O 7 ), aluminum phosphate (AlPO 4 ), and graphite foam reportedly alleviates the drawback of the high supercooling degree. 15 However, the combination with nucleating agents lowers the heat storage density of PCMs. This is attributed to not only the replacement of the PCM by the nucleating agents but also the unexpected interactions between the PCM and additives. The addition of only 6.5% of graphite foam induced a 13% decrease in the heat storage density of a eutectic mixture of galactitol and mannitol. 15 It is preferable to utilize materials with inherently low supercooling degrees so that no additional effort is required to lower the supercooling degree. The applications of PCMs are also limited by issues such as low thermal conductivity and pronounced liquid leakage, in addition to the high supercooling degree. 16 A widely used technique for the alleviation of these drawbacks is the impregnation of PCMs in porous matrixes to form phase change composites (PCCs). 17 The impregnated PCMs are confined in the pores of the matrixes by the capillary and surface tension forces, thereby preventing the leakage of the liquid PCMs. 18,19 Silica-based materials, such as silica gel 20 and silica fume, 21 carbon-based materials, such as expanded graphite (EG) 6,8,22 and carbon nanotube (CNT) sponge, 23 and silicate minerals, such as expanded perlite 24 and vermiculite, 25 are utilized as the supporting materials for PCCs. The lightweight and inexpensive EG is a promising supporting material owing to its high porosity, which ensures a high PCM content in the porous support and high thermal conductivity. Xia et al. 26 demonstrated that the impregnation of 93 wt % of paraffin in only 7 wt % of EG induced a significant increase (10 times) in the thermal conductivity of paraffin. Wang et al. 27 demonstrated the excellent shape stabilization of a sebacic acid/EG composite that was prepared with 85 wt % of the PCM. The thermal conductivity of the composite was 5.35 W/m·K, whereas that of pure sebacic acid was only 0.37 W/m·K. These results indicated the efficacy of EG as a porous support for medium-temperature PCMs. Azelaic acid [HOOC(CH 2 ) 7 COOH] (AA) is a dicarboxylic acid that is naturally found in wheat and barley. It is nonhazardous and suitable for cost-effective large-scale production. It is utilized in the medicine and polymer industries. 28,29 AA has been utilized as a PCM in only one study (1996) 30 to the best of our knowledge. 
The phase change temperature of AA was determined to be ∼107°C, which is suitable for medium-temperature applications. However, the other thermal properties of AA such as the LHS capacity, supercooling degree, thermal stability, and cycling durability have not been investigated. AA exhibits a high LHS capacity (∼202 J/g) and a relatively low supercooling degree (5.8°C). Therefore, it is necessary to conduct an in-depth evaluation of the suitability of AA as a mediumtemperature PCM. A series of novel AA/EG PCCs was prepared for mediumtemperature applications in the present study. AA was impregnated into the porous network of EG via evaporative impregnation. The optimal content of AA (85 wt %) in the pores of EG was determined by the leakage tests. The confinement of AA in the pores of EG lowered the supercooling degree of AA. The Fourier-transform infrared (FT-IR) spectra and X-ray diffraction (XRD) patterns for AA and the 85 wt % AA/EG PCC revealed the absence of chemical interactions between AA and the EG surface. The cycling durability was investigated over 200 accelerated thermal cycles to determine the chemical and physical stability of the PCC. The thermal conductivity of the PCC increased by 15.7 times as compared to that of the pure AA. The heat storage/release behaviors were investigated using a homemade apparatus. The melting and solidification rates of the PCC (15 g) were 5.0-fold and 7.4-fold higher, respectively, than those of the pure AA (15 g) (75 → 120 → 75°C). RESULTS AND DISCUSSION 2.1. Characterization of the AA/EG PCCs. Figure 1 shows the scanning electron microscopy (SEM) images of EG and the AA/EG PCCs. EG is a widely utilized porous support, and its porosity has been investigated in detail in other studies. 26,27,31,32 The EG particles ( Figure 1a) exhibited a worm-like structure, while the pores of EG ( Figure 1b filled with an increasing amount of bulk AA with the increase in the AA content to 90 wt % ( Figure 1e) and 95 wt % ( Figure 1f). Consequently, an excessive quantity of AA was present in the 90 and 95 wt % AA/EG PCCs. A leakage test was performed to investigate the shape stability of the AA/EG PCCs. The composites with various AA contents (80−95 wt %) were initially collected on filter papers; subsequently, they were placed in an oven at 130°C (∼20°C higher than the melting point of AA) for 60 min. Thereafter, the composites were removed from the filter papers and observed to detect the stains of AA ( Figure 2). The amount of the leaked AA from the 95 wt % AA/EG PCC was markedly high, whereas that from the 90 wt % AA/EG PCC was negligible. The 80 and 85 wt % AA/EG PCCs exhibited no leakage of AA, which indicated their excellent shape stability. EG effectively confined ∼85 wt % of AA via the capillary forces and surface tension forces, thereby preventing the leakage. These results were consistent with the other reports, where the maximum PCM content without leakage was determined to be 85−93 wt %. 18,26,27,31 The increase in the PCM content induced an increase in the TES capacity. Therefore, the composite with 85 wt % AA was selected as the optimal AA/ EG PCC for further investigations. The subsequent results in this work will be discussed with respect to the 85 wt % AA/EG PCC. The chemical compatibility of the AA/EG PCCs was investigated by FT-IR spectroscopy (Figure 3a). The FT-IR spectrum of the pure EG exhibited a broad band that was centered at 3409 cm −1 . 
This band was assigned to the −OH stretching mode from either the alcoholic/phenolic functional groups of EG or the surface-adsorbed water. 33,34 The peak at 1619 cm−1 was assigned to the C−O vibration mode. The broad peak within 3700−2300 cm−1 in the FT-IR spectrum of AA corresponded to the stretching vibrations of the −OH groups. The peaks at 2931 and 2854 cm−1 were assigned to the stretching vibrations of the −C−H bonds. The peaks at 1697, 1419, 1311, and 910 cm−1 were assigned to the stretching vibrations of the C=O groups, the bending vibration of the −C−H bonds, the stretching vibrations of the C−O groups, and the bending mode of the −OH groups, respectively. The characteristic peaks of AA overlapped with those of EG in the FT-IR spectrum of the AA/EG PCC. Moreover, no new peaks were detected. This indicated the physical compounding of EG and AA without the occurrence of chemical reactions. Therefore, EG and AA exhibited high chemical compatibility in the PCC. The crystallographic properties of EG, AA, and the 85 wt % AA/EG PCC were characterized by XRD (Figure 3b). The XRD pattern for EG presented one high-intensity peak at 2θ = 26.6° that corresponded to the characteristic (002) peak of graphite. Figure 3b shows the XRD patterns of the two crystalline phases of AA, that is, the α- and β-forms. 35 The XRD pattern of the commercial form of AA (α) presented five high-intensity reflections that were centered at 2θ = 8.4, 19.0, 22.1, 23.5, and 28.2°. The α-form of AA transformed to the β-form after melting and recrystallization. The XRD pattern for β-form AA presented five peaks at 2θ = 9.4, 18.6, 19.2, 23.0, and 27.2°. The XRD pattern for the 85 wt % AA/EG PCC exhibited all the characteristic peaks of EG and β-form AA. This was attributed to the melting and recrystallization of AA in the pores of EG during the preparation of the PCC. The low intensity of the peak at 9.4° was attributed to contrast matching between EG and the confined AA in the pore. 36 The absence of new peaks in the pattern indicated the physical combination of EG and AA without the occurrence of chemical reactions. The phase change parameters, including the melting temperature (T M ), the solidification temperature (T S ), and the corresponding latent heats, are presented in Table 1. An endothermic peak and an exothermic peak were presented during melting and solidification, respectively, by both the pure AA and the PCC. The T M and T S of the pure AA were 108.0 and 102.2°C, respectively; thus, the pure AA exhibited a relatively low supercooling degree (ΔT) of 5.8°C. The PCC exhibited a lower T M (105.8°C) and a higher T S (104.8°C) as compared to those exhibited by the pure AA. Thus, the supercooling degree of the PCC (1.0°C) was lower than that of the pure AA. This was attributed to the fact that the inner surface of EG functioned as a heterogeneous nucleation center to not only accelerate the crystallization but also decrease the particle size of AA during the crystallization. 31,37 The LHS capacities, that is, the ΔH M and ΔH S of the pure AA, were 202.0 and 201.2 J/g, respectively. The ΔH M and ΔH S of the PCC were 162.5 and 162.2 J/g, respectively. The phase change latent heat of the PCC was lower than that of the pure AA owing to the presence of EG with no latent heat. The phase change latent heat might also have been lowered by the confinement effects that suppressed the crystallization of the confined PCM.
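A quick arithmetic check of the values quoted above, assuming the EG matrix itself stores no latent heat, reproduces the reported supercooling degrees and the latent heat expected from simple mass-fraction dilution; the small shortfall of the measured enthalpy relative to the dilution estimate anticipates the crystallization fraction discussed below.

```python
# Numerical check using the DSC values quoted above (Table 1).
# Supercooling degree: dT = T_M - T_S; dilution estimate assumes the EG matrix
# contributes no latent heat, so dH_expected = x * dH_pure for PCM mass fraction x.
T_M_AA, T_S_AA = 108.0, 102.2      # pure azelaic acid, deg C
T_M_PCC, T_S_PCC = 105.8, 104.8    # 85 wt % AA/EG composite, deg C
dH_M_AA, dH_M_PCC = 202.0, 162.5   # melting enthalpies, J/g
x = 0.85                           # AA mass fraction in the composite

print(f"supercooling, pure AA : {T_M_AA - T_S_AA:.1f} C")    # 5.8 C
print(f"supercooling, AA/EG   : {T_M_PCC - T_S_PCC:.1f} C")  # 1.0 C

dH_expected = x * dH_M_AA
print(f"expected latent heat from dilution: {dH_expected:.1f} J/g")
print(f"measured / expected: {dH_M_PCC / dH_expected:.1%}")  # cf. the crystallization fraction F below
```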
The melting LHS capacity (ΔH_M) and the supercooling degree (ΔT) of the 85 wt % AA/EG PCC were compared to those of other medium-temperature PCCs reported in previous studies (Table 2). The ΔH_M and ΔT of the 85 wt % AA/EG PCC were higher and lower, respectively, than those of most of the listed PCCs. The ΔH_M of the 85 wt % AA/EG PCC was lower than that of erythritol/EG and erythritol−mannitol/EG. However, the ΔT of the 85 wt % AA/EG PCC was significantly lower than that of erythritol/EG and erythritol−mannitol/EG. The ΔH_M and ΔT of the 85 wt % AA/EG PCC were comparable to those of sebacic acid/EG and sebacic acid/CNT sponge. These results indicated that the 85 wt % AA/EG PCC prepared in the present study exhibited a favorable combination of LHS capacity and supercooling degree.

2.3. Thermal Stability of the AA/EG PCC. The thermal stabilities of the pure AA and the AA/EG PCC were examined by thermogravimetric analysis (TGA) (Figure 5). The TGA curves revealed that the pure AA underwent approximately 100% weight loss by 400°C. The 85 wt % AA/EG PCC exhibited a weight loss of 85.6%, which was close to the nominal AA content, indicating that AA was uniformly impregnated in the EG matrix. The pure AA exhibited onset and endset decomposition temperatures of 228−274°C, whereas the 85 wt % AA/EG PCC exhibited onset and endset decomposition temperatures of 256−321°C. The higher thermal stability of the PCC as compared to that of the pure AA was attributed to the interactions, such as capillary forces and surface tension forces, between AA and the pore surfaces of EG. 36 Furthermore, no decomposition occurred at temperatures close to the phase change temperature of the PCC.

2.4. Cycling Durability of the AA/EG PCC. The phase change properties of the as-prepared PCC and of the PCC after 200 heating/cooling cycles are presented in Figure 6a and Table 1, respectively. The T_M and T_S of the as-prepared PCC were 105.8 and 104.8°C, respectively. The T_M and T_S of the PCC after 200 cycles were 105.9 and 105.0°C, respectively. The change in the phase change temperatures after 200 heating/cooling cycles was insignificant. The LHS capacities of the as-prepared PCC were compared with those of the PCC after the cycling test. The ΔH_M and ΔH_S of the as-prepared PCC were 162.5 and 162.2 J/g, respectively. The ΔH_M and ΔH_S of the PCC after 200 cycles were 158.4 and 158.0 J/g, respectively. The minor variation (2.6%) in the ΔH_M and ΔH_S after 200 heating/cooling cycles indicated the high durability of the AA/EG PCC. The chemical stability of the composite subjected to multiple thermal cycles was determined by FT-IR spectroscopy (Figure 6b). There were no significant differences in the peak positions, peak intensities, and absorption band shapes between the as-prepared and cycled composites, which indicated the high chemical stability of the PCC. It was concluded that the AA/EG PCC exhibited high cycling durability and chemical stability for long-term operations.
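The cycling-durability figures quoted above can be verified with one line of arithmetic per quantity; the illustrative Python sketch below computes the relative change in the latent heats after 200 cycles from the reported values (roughly 2.5−2.6%, consistent with the ∼2.6% variation stated in the text).

# Relative change in the latent heats of the PCC after 200 heating/cooling cycles.
DH_M_0, DH_S_0 = 162.5, 162.2        # J/g, as-prepared PCC
DH_M_200, DH_S_200 = 158.4, 158.0    # J/g, after 200 cycles

loss_M = (DH_M_0 - DH_M_200) / DH_M_0 * 100  # -> ~2.5 %
loss_S = (DH_S_0 - DH_S_200) / DH_S_0 * 100  # -> ~2.6 %
print(f"Loss in dH_M: {loss_M:.1f} %, loss in dH_S: {loss_S:.1f} %")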
2.5. Thermal Conductivity of the AA/EG PCC. The thermal conductivities of the pure AA and the AA/EG PCC were determined using the transient plane source method at 25°C (Figure 7). The pure AA exhibited a thermal conductivity of 0.21 W/m·K, which is low, as is typical for organic PCMs. The thermal conductivity of the PCC was 3.25 W/m·K; that is, it increased by 15.7 times as compared to that of the pure AA. This was attributed to the presence of EG, which has a high thermal conductivity. The presence of 3−20 wt % EG increased the thermal conductivities of other EG-based PCCs (Table 2) by 5−30 times. The substantial increment (15.7 times) in the thermal conductivity of the AA/EG PCC, obtained with 15 wt % EG, was consistent with the increments observed in the other studies on EG-based PCCs (Table 2). The improvement in the thermal conductivity enhanced the heat-transfer rate of the PCC and thereby its thermal performance.

2.6. Heat Storage/Release Characteristics of the AA/EG PCC. The study of the heat storage/release characteristics is critical for the determination of the thermal performance of a PCC. The experimental setup of the heat storage/release test is shown in Figure 9, and the time-dependent variations in the temperature are presented in Figure 8. The heat storage/release properties of the PCC were superior to those of the pure AA. The complete melting and solidification times were determined using a tangential method. The PCC and the pure AA required 342 and 1702 s, respectively, to melt completely during heat storage, and 156 and 1156 s, respectively, to solidify completely during heat release. Thus, the heat storage and heat release of the AA/EG PCC were 5.0-fold and 7.4-fold faster, respectively, than those of the pure AA. The composite exhibited excellent heat storage/release properties owing to its high thermal conductivity, and the fast heat storage and release optimized the thermal performance of the AA/EG PCC.
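As a simple consistency check, the speed-up factors stated above follow directly from the reported melting and solidification times; the Python sketch below is illustrative only.

# Speed-up of heat storage/release of the PCC relative to the pure AA,
# from the reported complete-melting and complete-solidification times.
t_melt_PCC, t_melt_AA = 342, 1702    # s
t_sol_PCC, t_sol_AA = 156, 1156      # s

storage_speedup = t_melt_AA / t_melt_PCC   # -> ~5.0
release_speedup = t_sol_AA / t_sol_PCC     # -> ~7.4
print(f"Heat storage {storage_speedup:.1f}x faster, heat release {release_speedup:.1f}x faster")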
CONCLUSIONS

A novel PCC, with AA as the PCM and EG as the supporting matrix, was prepared by evaporative impregnation for medium-temperature utilization. The optimal impregnation capacity of EG for AA, at which there was no leakage of liquid AA, was 85 wt %. The pure AA exhibited a ΔH_M and a ΔH_S of 202.0 and 201.2 J/g, respectively. It presented a T_M and a T_S of 108.0 and 102.2°C, respectively, thereby resulting in a supercooling degree of 5.8°C. The 85 wt % AA/EG PCC exhibited a ΔH_M and a ΔH_S of 162.5 and 162.2 J/g, respectively. It presented a T_M and a T_S of 105.8 and 104.8°C, respectively, thereby resulting in a supercooling degree of only 1.0°C. The LHS capacity and supercooling degree of the AA/EG PCC were higher and lower, respectively, than those of most of the previously studied PCCs. The thermal stability of the PCC was higher than that of the pure AA, which was attributed to the capillary and surface tension forces between AA and EG. Furthermore, the PCC exhibited high cycling durability and chemical stability after 200 heating/cooling cycles. The thermal conductivity of the PCC was 3.25 W/m·K; that is, it increased by 15.7 times as compared to that of the pure AA owing to the presence of EG with high thermal conductivity. The results of the heat storage/release test indicated that the heat storage and release rates of the PCC were 5.0-fold and 7.4-fold higher, respectively, than those of the pure AA. Therefore, the AA/EG PCC exhibited high potential for application as a medium-temperature heat storage medium owing to its excellent thermochemical characteristics.

MATERIALS AND METHODS

4.3. Characterization Methods. The microstructures and morphologies of EG and the AA/EG PCCs were observed by field emission SEM (JSM-6701F, JEOL Ltd., Tokyo, Japan). The chemical compositions of AA, EG, and the 85 wt % AA/EG PCC were determined by FT-IR spectroscopy (Nicolet 6700, Thermo Fisher Scientific, Massachusetts, USA). The FT-IR spectra were recorded in the transmittance mode with KBr pellets over a wavenumber range of 400−4000 cm−1. The crystalline phases in AA, EG, and the 85 wt % AA/EG PCC were analyzed by XRD (MiniFlex, Rigaku Corporation, Tokyo, Japan). The XRD patterns were obtained using Cu Kα radiation with a current, a testing voltage, a scanning rate, and a 2θ range of 15 mA, 40 kV, 5°/min, and 5−50°, respectively.

The leakage of the materials was tested by the following procedure: first, the samples were collected on filter papers; subsequently, they were placed in an oven at 130°C (∼20°C higher than the melting point of AA) for 60 min. Finally, the composites were removed from the filter papers and carefully observed to detect stains of AA.

The phase change characteristics of AA and the 85 wt % AA/EG PCC were obtained by DSC (DSC 4000, PerkinElmer, Inc., Massachusetts, USA). The experiments were performed at 50−130°C with a heating rate of 5°C/min under a N2 purge of 20 mL/min. The instrument was calibrated with standard indium and zinc before the experiments. The phase change temperatures were determined as the onset temperatures, and the latent heats were obtained by integrating the areas of the phase change peaks.

The thermal stabilities of AA and the 85 wt % AA/EG PCC were evaluated by TGA (TGA 4000, PerkinElmer, Inc., Massachusetts, USA). TGA was performed at 30−450°C with a temperature ramp rate of 10°C/min.

The cycling durability of the 85 wt % AA/EG PCC was tested over 200 heating/cooling cycles. A sample (1 g) was placed in a glass vial that was cycled between two temperature-controlled oil baths (25 ↔ 140°C). The dwell time in each bath was 5 min, which was sufficient for the tested sample to reach the temperature of the bath.

The thermal conductivities of the pure AA and the 85 wt % AA/EG PCC were determined using the transient plane source method (TPS 3500, Hot Disk AB, Göteborg, Sweden) at 25°C. The measurement was performed four times for each sample to obtain an average result. The PCC was compressed into two round blocks, each with dimensions of 30 mm × 10 mm and a weight of 5.6 g, using a homemade mold and compressor. The pure AA was melted and poured into the mold to obtain two round blocks of identical size for the thermal conductivity measurement.
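As an aside, the bulk density of the compressed PCC blocks used for the conductivity measurement follows from the stated mass and dimensions; the same density was targeted when packing the heat storage unit described next. The Python sketch below is illustrative only and assumes that the 30 mm × 10 mm figures are the block diameter and thickness.

import math

# Bulk density of a compressed PCC block for the thermal conductivity test,
# assuming 30 mm is the block diameter and 10 mm its thickness.
mass_g = 5.6                        # g, block mass
diameter_cm, height_cm = 3.0, 1.0
volume_cm3 = math.pi * (diameter_cm / 2) ** 2 * height_cm   # ~7.07 cm^3
density = mass_g / volume_cm3                               # ~0.79 g/cm^3
print(f"Block density ~ {density:.2f} g/cm^3")

# A 15 g charge packed at the same density would occupy ~19 cm^3 of the heat storage unit.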
The heat storage/release properties of the pure AA and the 85 wt % AA/EG PCC were investigated with a homemade apparatus (Figure 9) using the technique described in ref 40. The PCC (15 g) was compressed in a heat storage unit (30 mm × 150 mm) so that its density was similar to that of the round blocks used in the thermal conductivity test. The pure AA (15 g) was introduced into the heat storage unit and then melted and recrystallized prior to a heat storage/release measurement. A T-type thermocouple and a data acquisition unit (MV200, Yokogawa Electric Corporation, Tokyo, Japan) were employed to monitor the temperature changes during the tests. The heat storage unit was initially immersed in a low-temperature oil bath (75°C) until it reached the bath temperature. Subsequently, the unit was quickly transferred to a high-temperature oil bath (120°C), and the temperature variations during the heat storage (melting) were recorded. After the temperature of the unit had remained at 120°C for a certain time, the unit was quickly transferred back to the low-temperature oil bath, and the temperature changes during the heat release (cooling) were monitored.
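The complete melting and solidification times reported in section 2.6 were read off such temperature-time traces with a tangential method. As a rough illustration of how a comparable completion time could be extracted programmatically from a logged trace, the Python sketch below detects the end of the phase-change plateau as the last point at which the local heating rate stays below a small threshold. This is a simplified stand-in for the tangential construction rather than the exact procedure of ref 40, and the function name, threshold value, and synthetic trace are assumptions for illustration.

import numpy as np

def completion_time(t, T, plateau_rate=0.02):
    """Estimate when the phase-change plateau ends in a heating trace.

    t : array of times (s); T : array of sample temperatures (deg C).
    plateau_rate : assumed maximum |dT/dt| (deg C/s) during the plateau.
    Returns the time of the last sample whose local heating rate is below
    plateau_rate, taken here as the complete-melting time.
    """
    rate = np.gradient(T, t)                  # local heating rate, deg C/s
    plateau = np.where(np.abs(rate) < plateau_rate)[0]
    return t[plateau[-1]] if plateau.size else None

# Synthetic example trace: heat-up, a melting plateau near 106 deg C, then heat-up again.
t = np.arange(0, 600, 1.0)
T = np.piecewise(t, [t < 100, (t >= 100) & (t < 300), t >= 300],
                 [lambda s: 75 + 0.31 * s, 106.0, lambda s: 106 + 0.05 * (s - 300)])
print(completion_time(t, T))                  # ~299 s for this synthetic trace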